nep-big New Economics Papers
on Big Data
Issue of 2019‒12‒16
twenty papers chosen by
Tom Coupé
University of Canterbury

  1. Boosting: Why You Can Use the HP Filter By Peter C.B. Phillips; Zhentao Shi
  2. Financial Time Series Forecasting with Deep Learning : A Systematic Literature Review: 2005-2019 By Omer Berat Sezer; Mehmet Ugur Gudelek; Ahmet Murat Ozbayoglu
  3. Measuring Founding Strategy By Guzman, Jorge; Li, Aishen
  4. The AI Techno-Economic Segment Analysis By Giuditta De Prato; Montserrat Lopez Cobo; Sofia Samoili; Riccardo Righi; Miguel Vazquez Prada Baillet; Melisande Cardona
  5. Dynamic Portfolio Management with Reinforcement Learning By Junhao Wang; Yinheng Li; Yijie Cao
  6. Machine Learning et nouvelles sources de données pour le scoring de crédit By Christophe Hurlin; Christophe Pérignon
  7. A Machine Learning Approach to Adaptive Robust Utility Maximization and Hedging By Tao Chen; Michael Ludkovski
  8. Merger Policy in Digital Markets: An Ex-Post Assessment By Elena Argentesi; Paolo Buccirossi; Emilio Calvano; Tomaso Duso; Alessia Marrazzo; Salvatore Nava
  9. High-Dimensional Forecasting in the Presence of Unit Roots and Cointegration By Stephan Smeekes; Etienne Wijler
  10. Financial Market Directional Forecasting With Stacked Denoising Autoencoder By Shaogao Lv; Yongchao Hou; Hongwei Zhou
  11. A comparison of machine learning model validation schemes for non-stationary time series data By Schnaubelt, Matthias
  12. The Hardware–Software Model: A New Conceptual Framework of Production, R&D, and Growth with AI By Jakub Growiec
  13. An exploratory study of populism: the municipality-level predictors of electoral outcomes in Italy By Levi, Eugenio; Patriarca, Fabrizio
  14. Introduction to Solving Quant Finance Problems with Time-Stepped FBSDE and Deep Learning By Bernhard Hientzsch
  15. Applications of the Deep Galerkin Method to Solving Partial Integro-Differential and Hamilton-Jacobi-Bellman Equations By Ali Al-Aradi; Adolfo Correia; Danilo de Frietas Naiff; Gabriel Jardim; Yuri Saporito
  16. Refinements of Barndorff-Nielsen and Shephard model: an analysis of crude oil price with machine learning By Indranil SenGupta; William Nganje; Erik Hanson
  17. Neural network for pricing and universal static hedging of contingent claims By Vikranth Lokeshwar; Vikram Bhardawaj; Shashi Jain
  18. Intérêt des adhérents d'une mutuelle pour des services utilisant leurs données personnelles dans le cadre de la médecine personnalisée By Bénédicte H. Apouey
  19. Deep Unsupervised 4D Seismic 3D Time-Shift Estimation with Convolutional Neural Networks By Dramsch, Jesper Sören; Christensen, Anders Nymark; MacBeth, Colin; Lüthje, Mikael
  20. U-CNNpred: A Universal CNN-based Predictor for Stock Markets By Ehsan Hoseinzade; Saman Haratizadeh; Arash Khoeini

  1. By: Peter C.B. Phillips (Cowles Foundation, Yale University); Zhentao Shi (The Chinese University of Hong Kong)
    Abstract: The Hodrick-Prescott (HP) filter is one of the most widely used econometric methods in applied macroeconomic research. The technique is nonparametric and seeks to decompose a time series into a trend and a cyclical component unaided by economic theory or prior trend specification. Like all nonparametric methods, the HP filter depends critically on a tuning parameter that controls the degree of smoothing. Yet in contrast to modern nonparametric methods and applied work with these procedures, empirical practice with the HP filter almost universally relies on standard settings for the tuning parameter that have been suggested largely by experimentation with macroeconomic data and heuristic reasoning about the form of economic cycles and trends. As recent research (Phillips and Jin, 2015) has shown, standard settings may not be adequate in removing trends, particularly stochastic trends, in economic data. This paper proposes an easy-to-implement practical procedure of iterating the HP smoother that is intended to make the filter a smarter smoothing device for trend estimation and trend elimination. We call this iterated HP technique the boosted HP filter in view of its connection to $L_{2}$-boosting in machine learning. The paper develops limit theory to show that the boosted HP (bHP) filter asymptotically recovers trend mechanisms that involve unit root processes, deterministic polynomial drifts, and polynomial drifts with structural breaks, thereby covering the most common trends that appear in macroeconomic data and current modeling methodology. In doing so, the boosted filter provides a new mechanism for consistently estimating multiple structural breaks even without knowledge of the number of such breaks. A stopping criterion is used to automate the iterative HP algorithm, making it a data-determined method that is ready for modern data-rich environments in economic research. The methodology is illustrated using three real data examples that highlight the differences between simple HP filtering, the data-determined boosted filter, and an alternative autoregressive approach. These examples show that the bHP filter is helpful in analyzing a large collection of heterogeneous macroeconomic time series that manifest various degrees of persistence, trend behavior, and volatility.
    Keywords: Boosting, Cycles, Empirical macroeconomics, Hodrick-Prescott filter, Machine learning, Nonstationary time series, Trends, Unit root processes
    JEL: C22 C55 E20
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2212&r=all
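    The iterated smoothing behind the boosted HP filter is easy to reproduce. Below is a minimal Python sketch using the HP filter from statsmodels; for simplicity it runs a fixed number of boosting iterations in place of the paper's data-driven stopping criterion, and the function name and defaults are illustrative only.

      import numpy as np
      from statsmodels.tsa.filters.hp_filter import hpfilter

      def boosted_hp(y, lamb=1600, iterations=5):
          # Apply the HP smoother repeatedly to the remaining cycle;
          # the accumulated trends form the boosted-HP trend estimate.
          y = np.asarray(y, dtype=float)
          trend = np.zeros_like(y)
          cycle = y.copy()
          for _ in range(iterations):
              cycle, t = hpfilter(cycle, lamb=lamb)
              trend += t
          return trend, cycle

    One boosting pass reduces to the standard HP filter; further passes remove residual trend (including stochastic trend) that a single pass leaves in the cycle.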
  2. By: Omer Berat Sezer; Mehmet Ugur Gudelek; Ahmet Murat Ozbayoglu
    Abstract: Financial time series forecasting is, without a doubt, the top choice of computational intelligence for finance researchers in both academia and the financial industry due to its broad implementation areas and substantial impact. Machine Learning (ML) researchers have come up with various models, and a vast number of studies have been published accordingly. As such, a significant number of surveys exist covering ML studies for financial time series forecasting. Lately, Deep Learning (DL) models have started appearing within the field, with results that significantly outperform their traditional ML counterparts. Even though there is growing interest in developing models for financial time series forecasting research, there is a lack of review papers focused solely on DL for finance. Hence, our motivation in this paper is to provide a comprehensive literature review of DL studies on financial time series forecasting implementations. We not only categorize the studies according to their intended forecasting implementation areas, such as index, forex, and commodity forecasting, but also group them based on their DL model choices, such as Convolutional Neural Networks (CNNs), Deep Belief Networks (DBNs), and Long Short-Term Memory (LSTM). We also try to envision the future of the field by highlighting possible setbacks and opportunities, so that interested researchers can benefit.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.13288&r=all
  3. By: Guzman, Jorge; Li, Aishen
    Abstract: We propose an approach to measure strategy using text-based machine learning. The key insight is that distance in the statements made by companies can be partially indicative of their strategic positioning with respect to each other. We formalize this insight by proposing a new measure of strategic positioning---the strategy score---and defining the assumptions and conditions under which we can estimate it empirically. We then implement this approach to score the strategic positioning of a large sample of startups in Crunchbase in relation to contemporaneous public companies. Startups with a higher founding strategy score have higher equity outcomes, reside in locations with more venture capital, and receive a higher amount of financing in seed financing events. One implication of this result is that founding strategic positioning is important for startup performance.
    Date: 2019–11–22
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:7cvge&r=all
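    The core idea, that textual distance between company statements partially reveals strategic positioning, can be illustrated with standard text tools. A hypothetical Python sketch using TF-IDF vectors and cosine similarity; the paper's actual estimator, corpus, and identifying assumptions differ, and the company descriptions below are invented.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Invented stand-ins for public-company statements and a startup's text.
      incumbents = ["cloud infrastructure for large enterprises",
                    "social network connecting friends and family"]
      startup = ["privacy-first social network for small communities"]

      vec = TfidfVectorizer().fit(incumbents + startup)
      sims = cosine_similarity(vec.transform(startup), vec.transform(incumbents))
      # One possible differentiation measure: distance to the most similar
      # incumbent; higher values indicate a more differentiated position.
      strategy_score = 1.0 - sims.max()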
  4. By: Giuditta De Prato (European Commission - JRC); Montserrat Lopez Cobo (European Commission - JRC); Sofia Samoili (European Commission - JRC); Riccardo Righi (European Commission - JRC); Miguel Vazquez Prada Baillet (European Commission - JRC); Melisande Cardona (European Commission - JRC)
    Abstract: The Techno-Economics Segment (TES) analytical approach aims to offer a timely representation of an integrated and very dynamic technological domain not captured by official statistics or standard classifications. Domains of that type, such as photonics and artificial intelligence (AI), are rapidly evolving and expected to play a key role in the digital transformation, enabling further developments. They are therefore policy relevant, and it is important to have available a methodology and tools suitable to map their geographic presence, technological development, economic impact, and overall evolution. The TES approach was developed by the JRC. It provides quantitative analyses from a micro-based perspective. AI has become an area of strategic importance with the potential to be a key driver of economic development. The Commission announced in April 2018 a European strategy on AI in its communication "Artificial Intelligence for Europe", COM(2018)237, and in December a Coordinated Action Plan, COM(2018)795. In order to provide quantitative evidence for monitoring AI technologies in the worldwide economies, the TES approach is applied to AI in the present study. The general aim of this work is to provide an analysis of the AI techno-economic complex system, addressing the following three fundamental research questions: (i) Who are the economic players involved in the research and development as well as in the production and commercialisation of AI goods and services, and where are they located? (ii) Which specific technological areas (under the large umbrella of AI) have these players been working on? (iii) How is the network resulting from their collaboration shaped, and what collaborations have they been developing? This report addresses these research questions throughout its different sections, providing both an overview of the AI landscape and a deep understanding of the structure of the socio-economic system, offering useful insights for possible policy initiatives. This is even more relevant and challenging as the considered technologies are consolidating and introducing deep changes in the economy and society. From this perspective, the goal of this report is to draw a detailed map of the considered ecosystem, and to analyse it in a multidimensional way, while keeping the policy perspective in mind. The period considered in our analysis covers 2009 to 2018. We detected close to 58,000 relevant documents and identified 34,000 players worldwide involved in AI-related economic processes. We collected and processed information regarding these players to set up a basis from which the exploration of the ecosystem can take multiple directions depending on the targeted objective. In this report, we present indicators regarding three dimensions of analysis: (i) the worldwide landscape overview, (ii) the involvement of players in specific AI technological sub-domains, and (iii) the activities and collaborations in AI R&D processes. These are just some of the dimensions that can be investigated with the TES approach. We are currently including and analysing additional ones.
    Keywords: TES, TECHNO ECONOMIC SEGMENT, AI, ARTIFICIAL INTELLIGENCE, PREDICT, ICT R&D, DIGITAL TRANSFORMATION, DIGITAL ECONOMY, INNOVATION
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc118071&r=all
  5. By: Junhao Wang; Yinheng Li; Yijie Cao
    Abstract: Dynamic portfolio management concerns the continuous redistribution of assets within a portfolio to maximize the total return over a given period of time. With recent advances in machine learning and artificial intelligence, much effort has been put into designing and discovering efficient algorithmic ways to manage the portfolio. This paper presents two different reinforcement learning agents: policy gradient actor-critic and evolution strategy. The performance of the two agents is compared during backtesting. We also discuss the problem setup, from state space design to state value function approximation and policy control design. We include short positions to give the agent more flexibility during asset redistribution, and a constant trading cost of 0.25%. The agent is able to achieve a 5% return over 10 days of daily trading despite the 0.25% trading cost.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.11880&r=all
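    The role of the constant 0.25% trading cost is easy to make concrete. A minimal Python sketch of a one-step portfolio update with proportional transaction costs and short positions allowed; the reinforcement learning agents in the paper choose the weights, which are set by hand here.

      import numpy as np

      COST = 0.0025  # constant trading cost of 0.25% per unit of turnover

      def step_return(w_old, w_new, price_relatives):
          # price_relatives holds p_{t+1} / p_t for each asset; rebalancing
          # is penalized in proportion to the turnover it generates.
          turnover = np.abs(w_new - w_old).sum()
          gross = np.dot(w_new, price_relatives)
          return gross * (1.0 - COST * turnover)

      # Rebalance from an equal-weight portfolio into a long-short position.
      print(step_return(np.array([0.5, 0.5]),
                        np.array([1.3, -0.3]),
                        np.array([1.01, 0.99])))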
  6. By: Christophe Hurlin (LEO - Laboratoire d'Économie d'Orleans - CNRS - Centre National de la Recherche Scientifique - Université de Tours - UO - Université d'Orléans); Christophe Pérignon (GREGH - Groupement de Recherche et d'Etudes en Gestion à HEC - HEC Paris - Ecole des Hautes Etudes Commerciales - CNRS - Centre National de la Recherche Scientifique)
    Abstract: In this article, we discuss the contribution of Machine Learning techniques and new data sources (New Data) to credit-risk modelling. Credit scoring was historically one of the first fields of application of Machine Learning techniques. Today, these techniques make it possible to exploit new sources of data made available by the digitalization of customer relationships and social networks. The combination of new methodologies and new data has structurally changed the credit industry and favored the emergence of new players. First, we analyse the incremental contribution of Machine Learning techniques per se, holding the information set constant. We show that they lead to significant productivity gains but that the improvement in forecasting remains modest. Second, we quantify the contribution of this "data diversity", whether or not these new data are exploited through Machine Learning. It appears that some of these data contain weak signals that significantly improve the quality of the assessment of borrowers' creditworthiness. At the microeconomic level, these new approaches promote financial inclusion and access to credit for the most vulnerable borrowers. However, Machine Learning applied to these data can also lead to severe biases and discrimination.
    Keywords: Machine Learning (ML), Credit scoring, New data
    Date: 2019–11–21
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-02377886&r=all
  7. By: Tao Chen; Michael Ludkovski
    Abstract: We investigate the adaptive robust control framework for portfolio optimization and loss-based hedging under drift and volatility uncertainty. Adaptive robust problems offer many advantages but require handling a double optimization problem (infimum over market measures, supremum over the control) at each instance. Moreover, the underlying Bellman equations are intrinsically multi-dimensional. We propose a novel machine learning approach that solves for the local saddle-point at a chosen set of inputs and then uses a nonparametric (Gaussian process) regression to obtain a functional representation of the value function. Our algorithm resembles control randomization and regression Monte Carlo techniques but also brings multiple innovations, including adaptive experimental design, separate surrogates for the optimal control and the local worst-case measure, and computational speed-ups for the sup-inf optimization. Thanks to the new scheme, we are able to consider settings that were previously computationally intractable and provide several new financial insights about learning and optimal trading under unknown market parameters. In particular, we demonstrate the financial advantages of the adaptive robust framework compared to adaptive and static robust alternatives.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.00244&r=all
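    The Gaussian-process surrogate step can be sketched in a few lines. The sketch below assumes the local saddle-point values have already been computed at a set of design points; the adaptive experimental design and the sup-inf solver, which are the paper's substantive contributions, are replaced by random inputs and a stand-in function.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(0)
      X = rng.uniform(0.0, 1.0, size=(50, 2))      # design points (states)
      y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2       # stand-in for saddle values

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-6)
      gp.fit(X, y)
      # The fitted surface serves as the functional representation of the
      # value function when stepping the Bellman recursion backwards.
      value, std = gp.predict(rng.uniform(0.0, 1.0, (5, 2)), return_std=True)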
  8. By: Elena Argentesi; Paolo Buccirossi; Emilio Calvano; Tomaso Duso; Alessia Marrazzo; Salvatore Nava
    Abstract: This paper presents a broad retrospective evaluation of mergers and merger decisions in the digital sector. We first discuss the most crucial features of digital markets, such as network effects, multi-sidedness, big data, and rapid innovation, that create important challenges for competition policy. We show that these features have been key determinants of the theories of harm in major merger cases in the past few years. We then analyse the characteristics of almost 300 acquisitions carried out by three major digital companies – Amazon, Facebook, and Google – between 2008 and 2018. We cluster target companies by their area of economic activity and show that they span a wide range of economic sectors. In most cases, their products and services appear to be complementary to those supplied by the acquirers. Moreover, target companies seem to be particularly young, being four years old or younger in nearly 60% of cases at the time of the acquisition. Finally, we examine two important merger cases, Facebook/Instagram and Google/Waze, providing a systematic assessment of the theories of harm considered by the UK competition authorities (CAs) as well as evidence on the evolution of the market after the transactions were approved. We discuss whether the CAs performed complete and careful analyses to foresee the competitive consequences of the investigated mergers and whether a more effective merger control regime can be achieved within the current legal framework.
    Keywords: Digital Markets, Mergers, Network Effects, Big Data, Platforms, Ex-post, Antitrust
    JEL: L4 K21
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1836&r=all
  9. By: Stephan Smeekes; Etienne Wijler
    Abstract: We investigate how the possible presence of unit roots and cointegration affects forecasting with Big Data. As most macroeconomic time series are very persistent and may contain unit roots, a proper handling of unit roots and cointegration is of paramount importance for macroeconomic forecasting. The high-dimensional nature of Big Data complicates the analysis of unit roots and cointegration in two ways. First, transformations to stationarity require performing many unit root tests, increasing the room for errors in the classification. Second, modelling unit roots and cointegration directly is more difficult, as standard high-dimensional techniques such as factor models and penalized regression are not directly applicable to (co)integrated data and need to be adapted. We provide an overview of both issues and review methods proposed to address them. These methods are also illustrated with two empirical applications.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.10552&r=all
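    The first complication, performing many unit root tests when transforming a large panel to stationarity, is simple to see in code. A sketch using the ADF test from statsmodels; with hundreds of series, classification errors of exactly the kind the review discusses accumulate quickly, so the fixed 5% level here is purely illustrative.

      import pandas as pd
      from statsmodels.tsa.stattools import adfuller

      def difference_nonstationary(df, alpha=0.05):
          # ADF-test each series and difference those for which the unit
          # root null is not rejected at the chosen level.
          out = {}
          for col in df.columns:
              pval = adfuller(df[col].dropna())[1]
              out[col] = df[col].diff() if pval > alpha else df[col]
          return pd.DataFrame(out)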
  10. By: Shaogao Lv; Yongchao Hou; Hongwei Zhou
    Abstract: Forecasting stock market direction is a fascinating but challenging problem in finance. Although many popular shallow computational methods (such as backpropagation networks and support vector machines) have been extensively proposed, most algorithms have not yet attained a desirable level of applicability. In this paper, we present a deep learning model with a strong ability to generate high-level feature representations for accurate financial prediction. Specifically, a stacked denoising autoencoder (SDAE) from deep learning is applied to predict the daily CSI 300 index, from the Shanghai and Shenzhen Stock Exchanges in China. We use six evaluation criteria to evaluate its performance compared with the backpropagation network and the support vector machine. The experiments show that the underlying financial model with deep learning technology has a significant advantage for predicting the CSI 300 index.
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.00712&r=all
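    A single building block of the stacked denoising autoencoder is compact; the full model pre-trains several such layers, stacks their encoders, and attaches a supervised output layer. A minimal PyTorch sketch of one denoising layer, with hypothetical dimensions and noise level:

      import torch
      import torch.nn as nn

      class DenoisingLayer(nn.Module):
          def __init__(self, d_in=64, d_hidden=32, noise_std=0.1):
              super().__init__()
              self.noise_std = noise_std
              self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Sigmoid())
              self.decoder = nn.Linear(d_hidden, d_in)

          def forward(self, x):
              # Corrupt the input, then reconstruct the clean version; the
              # encoder is forced to learn noise-robust representations.
              noisy = x + self.noise_std * torch.randn_like(x)
              return self.decoder(self.encoder(noisy))

      layer = DenoisingLayer()
      x = torch.randn(8, 64)
      reconstruction_loss = nn.MSELoss()(layer(x), x)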
  11. By: Schnaubelt, Matthias
    Abstract: Machine learning is increasingly applied to time series data, as it constitutes an attractive alternative to forecasts based on traditional time series models. For independent and identically distributed observations, cross-validation is the prevalent scheme for estimating out-of-sample performance in both model selection and assessment. For time series data, however, it is unclear whether forward-validation schemes, i.e., schemes that keep the temporal order of observations, should be preferred. In this paper, we perform a comprehensive empirical study of eight common validation schemes. We introduce a study design that perturbs global stationarity by introducing a slow evolution of the underlying data-generating process. Our results demonstrate that, even for relatively small perturbations, commonly used cross-validation schemes often yield estimates with the largest bias and variance, and forward-validation schemes yield better estimates of the out-of-sample error. We provide an interpretation of these results in terms of an additional evolution-induced bias and the sample-size dependent estimation error. Using a large-scale financial data set, we demonstrate the practical significance in a replication study of a statistical arbitrage problem. We conclude with some general guidelines on the selection of suitable validation schemes for time series data.
    Keywords: machine learning, model selection, model validation, time series, cross-validation
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:iwqwdp:112019&r=all
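    The contrast the study draws is directly visible in scikit-learn, where standard cross-validation lets test folds precede training observations while forward-validation preserves temporal order:

      import numpy as np
      from sklearn.model_selection import KFold, TimeSeriesSplit

      X = np.arange(12).reshape(-1, 1)  # twelve time-ordered observations

      # Cross-validation: early test folds are predicted from later data,
      # which leaks information when the data-generating process evolves.
      for train, test in KFold(n_splits=3).split(X):
          print("CV  train:", train, "test:", test)

      # Forward-validation: training data always precede the test fold.
      for train, test in TimeSeriesSplit(n_splits=3).split(X):
          print("FWD train:", train, "test:", test)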
  12. By: Jakub Growiec (Department of Quantitative Economics, Warsaw School of Economics, Poland; Rimini Centre for Economic Analysis)
    Abstract: The article proposes a new conceptual framework for capturing production, R&D, and economic growth in aggregative economic models which extend their horizon into the digital era. Two key factors of production are considered: hardware, including physical labor, traditional physical capital and programmable hardware, and software, encompassing human cognitive work and pre-programmed software, including artificial intelligence (AI). Hardware and software are complementary in production whereas their constituent components are mutually substitutable. The framework generalizes, among others, the standard model of production with capital and labor, models with capital–skill complementarity and skill-biased technical change, and unified growth theories embracing also the pre-industrial period. It offers a clear conceptual distinction between mechanization and automation of production. It delivers sharp, empirically testable and economically intuitive predictions for long-run growth, the evolution of factor shares, and the direction of technical change.
    Keywords: production function, R&D equation, technological progress, complementarity, automation, artificial intelligence
    JEL: O30 O40 O41
    Date: 2019–12
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:19-18&r=all
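    The complementarity structure described in the abstract can be written compactly. A stylized nested CES formulation consistent with that description (the paper's exact functional form may differ): hardware H and software S are gross complements at the top level, while the components within each nest are gross substitutes,

      \[ Y = \Big( H^{\frac{\sigma-1}{\sigma}} + S^{\frac{\sigma-1}{\sigma}} \Big)^{\frac{\sigma}{\sigma-1}}, \qquad \sigma < 1, \]
      \[ H = \Big( L_{p}^{\frac{\eta-1}{\eta}} + K^{\frac{\eta-1}{\eta}} + X^{\frac{\eta-1}{\eta}} \Big)^{\frac{\eta}{\eta-1}}, \qquad \eta > 1, \]

    where L_p is physical labor, K traditional physical capital, X programmable hardware, and S analogously aggregates human cognitive work and pre-programmed software (including AI) with a within-nest elasticity above one.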
  13. By: Levi, Eugenio; Patriarca, Fabrizio
    Abstract: We present an exploratory machine learning analysis of populist votes at the municipality level in the 2018 Italian general elections, in which populist parties gained almost 50% of the votes. Starting from a comprehensive set of local characteristics, we use an algorithm based on BIC to obtain a reduced set of predictors for each of the two populist parties (Five-Star Movement and Lega) and the two traditional ones (Democratic Party and Forza Italia). Differences and similarities between the sets of predictors further provide evidence on (1) heterogeneity in populisms and (2) whether this heterogeneity is related to the traditional left/right divide. The Five-Star Movement is stronger in larger and less safe municipalities, where people are younger, more often unemployed, and work more in services. On the contrary, Lega thrives in smaller and safer municipalities, where people are less educated and employed more in manufacturing and commerce. These differences do not correspond to differences between the Democratic Party and Forza Italia, providing evidence that heterogeneity in populisms does not correspond to a left/right divide. As robustness tests, we use an alternative machine learning technique (lasso) and apply our predictions to France to compare them with candidates' actual votes in the 2017 presidential elections.
    Keywords: Voting, Populism, Economic insecurity, Political Economy
    JEL: D72 F52 G01 J15 O33 Z13
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:glodps:430&r=all
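    The paper's main selection step is BIC-based; the lasso it mentions as a robustness check is the easier one to sketch. A hypothetical example of obtaining a reduced predictor set with cross-validated lasso (the design matrix below is simulated, not the paper's municipality data):

      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 30))   # stand-in municipality characteristics
      y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=200)  # vote share

      lasso = LassoCV(cv=5).fit(X, y)
      selected = np.flatnonzero(lasso.coef_)  # indices of retained predictors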
  14. By: Bernhard Hientzsch
    Abstract: In this introductory paper, we discuss how quantitative finance problems, under some common risk factor dynamics and for some common instruments and approaches, can be formulated as time-continuous or time-discrete forward-backward stochastic differential equation (FBSDE) final-value or control problems; how these final-value problems can be turned into control problems; how time-continuous problems can be turned into time-discrete problems; and how the forward and backward stochastic differential equations (SDEs) can be time-stepped. We obtain both forward and backward time-stepped, time-discrete stochastic control problems (where forward and backward indicate the direction in which the Y SDE is time-stepped) that we solve with optimization approaches, using deep neural networks for the controls and stochastic gradient and other deep learning methods for the actual optimization/learning. We close with examples of the forward and backward methods for a European option pricing problem. Several methods and approaches are new.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.12231&r=all
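    The forward time-stepping referred to in the abstract follows the now-familiar deep-BSDE pattern. A sketch of one Euler-discretized scheme in generic notation (drift b, diffusion sigma, driver f, terminal payoff g); the paper's formulations and its backward variant differ in detail:

      \[ X_{n+1} = X_n + b(t_n, X_n)\,\Delta t + \sigma(t_n, X_n)\,\Delta W_n, \]
      \[ Y_{n+1} = Y_n - f(t_n, X_n, Y_n, Z_n)\,\Delta t + Z_n^{\top} \Delta W_n, \]

    where the controls Z_n (and the initial value Y_0) are parameterized by neural networks and trained by stochastic gradient descent on the terminal loss \( \mathbb{E}\big[(Y_N - g(X_N))^2\big] \).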
  15. By: Ali Al-Aradi; Adolfo Correia; Danilo de Frietas Naiff; Gabriel Jardim; Yuri Saporito
    Abstract: We extend the Deep Galerkin Method (DGM) introduced in Sirignano and Spiliopoulos (2018) to solve a number of partial differential equations (PDEs) that arise in the context of optimal stochastic control and mean field games. First, we consider PDEs where the function is constrained to be positive and integrate to unity, as is the case with Fokker-Planck equations. Our approach involves reparameterizing the solution as the exponential of a neural network appropriately normalized to ensure both requirements are satisfied. This then gives rise to a partial integro-differential equation (PIDE) where the integral appearing in the equation is handled using importance sampling. Second, we tackle a number of Hamilton-Jacobi-Bellman (HJB) equations that appear in stochastic optimal control problems. The key contribution is that these equations are approached in their unsimplified primal form, which includes an optimization problem as part of the equation. We extend the DGM algorithm to solve for the value function and the optimal control simultaneously by characterizing both as deep neural networks. Training the networks is performed by taking alternating stochastic gradient descent steps for the two functions, a technique similar in spirit to policy improvement algorithms.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1912.01455&r=all
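    The Fokker-Planck reparameterization has a one-line core: the solution is written as a normalized exponential of a neural network N_theta, which enforces positivity and unit mass by construction,

      \[ p_\theta(x) = \frac{e^{N_\theta(x)}}{\int e^{N_\theta(y)}\,dy}, \]

    and the normalizing integral, which also appears inside the resulting PIDE, is estimated by importance sampling, \( \int e^{N_\theta(y)}\,dy = \mathbb{E}_{y \sim q}\big[ e^{N_\theta(y)} / q(y) \big] \) for a chosen proposal density q.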
  16. By: Indranil SenGupta; William Nganje; Erik Hanson
    Abstract: A commonly used stochastic model for derivative and commodity market analysis is the Barndorff-Nielsen and Shephard (BN-S) model. Though this model is very efficient and analytically tractable, it suffers from the absence of long-range dependence, among other issues. In this paper, the analysis is restricted to crude oil price dynamics. A simple way of improving the BN-S model with the implementation of various machine learning algorithms is proposed. This refined BN-S model is more efficient and has fewer parameters than other models that are used in practice as improvements of the BN-S model. The procedure and the model show the application of data science for extracting a "deterministic component" out of processes that are usually considered to be completely stochastic. Empirical applications validate the efficacy of the proposed model for long-range dependence.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.13300&r=all
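    For reference, the baseline dynamics being refined are the standard BN-S stochastic volatility equations for the log-price X,

      \[ dX_t = (\mu + \beta \sigma_t^2)\,dt + \sigma_t\,dW_t + \rho\,dZ_{\lambda t}, \]
      \[ d\sigma_t^2 = -\lambda \sigma_t^2\,dt + dZ_{\lambda t}, \qquad \lambda > 0, \]

    where Z is a subordinator independent of the Brownian motion W and rho <= 0 captures leverage. The paper's refinement uses machine learning to extract a deterministic component from this otherwise fully stochastic specification.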
  17. By: Vikranth Lokeshwar; Vikram Bhardawaj; Shashi Jain
    Abstract: We present here a regress-later Monte Carlo approach that uses neural networks for pricing high-dimensional contingent claims. The choice of the specific architecture of the neural networks used in the proposed algorithm provides for interpretability of the model, a feature that is often desirable in the financial context. Specifically, the interpretation leads us to demonstrate that any contingent claim -- possibly high-dimensional and path-dependent -- under the Markovian and no-arbitrage assumptions, can be semi-statically hedged using a portfolio of short-maturity options. We show how the method can be used to obtain an upper and a lower bound to the true price, where the lower bound is obtained by following a sub-optimal policy, and the upper bound by exploiting the dual formulation. Unlike other duality-based upper bounds, where one typically has to resort to nested simulation for constructing super-martingales, the martingales in the current approach come at no extra cost, without the need for any sub-simulations. We demonstrate through numerical examples the simplicity and efficiency of the method for both pricing and semi-static hedging of path-dependent options.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.11362&r=all
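    The interpretability claim has a concise core: a network with one hidden ReLU layer mapping the underlying price S_T to a payoff is, term by term, a portfolio of cash, a forward, and call options. Schematically, with hypothetical weights a_i and strikes k_i,

      \[ g_\theta(S_T) = w_0 + w_1 S_T + \sum_i a_i \, (S_T - k_i)^{+}, \]

    so a network fitted to a claim's continuation value can be read directly as a semi-static hedging portfolio of short-maturity options.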
  18. By: Bénédicte H. Apouey (PSE - Paris School of Economics, PJSE - Paris Jourdan Sciences Economiques - UP1 - Université Panthéon-Sorbonne - ENS Paris - École normale supérieure - Paris - INRA - Institut National de la Recherche Agronomique - EHESS - École des hautes études en sciences sociales - ENPC - École des Ponts ParisTech - CNRS - Centre National de la Recherche Scientifique)
    Abstract: In a quantitative survey conducted in 2016 among 1,700 members of a mutual insurer, we measured interest in various services that the mutual might offer and that would use members' personal data in the spirit of personalized medicine. Respondents are both concerned about the confidentiality of their data and interested in its use for monitoring, prediction, and prevention purposes. Interest is stronger among respondents in poor health and those worried about their old age. Interest is weaker among individuals with higher social positions, perhaps because of their material and cultural resources and their concern about the risks involved in the use of their data.
    Keywords: personal health data, connected devices, quantified self, big data, insurers, France
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-02295392&r=all
  19. By: Dramsch, Jesper Sören; Christensen, Anders Nymark; MacBeth, Colin; Lüthje, Mikael
    Abstract: We present a novel 3D warping technique for the estimation of 4D seismic time-shift. This unsupervised method provides a diffeomorphic 3D time-shift field that includes uncertainties; therefore, it does not need prior time-shift data to be trained. This results in a widely applicable method for time-lapse seismic data analysis. We explore the generalization of the method to unseen data, both in the same geological setting and in a different field, where the generalization error stays constant, and within an acceptable range, across test cases. We further explore upsampling of the warp field from a smaller network to decrease computational cost, and see some deterioration of the warp field quality as a result.
    Date: 2019–10–31
    URL: http://d.repec.org/n?u=RePEc:osf:eartha:82bnj&r=all
  20. By: Ehsan Hoseinzade; Saman Haratizadeh; Arash Khoeini
    Abstract: The performance of financial market prediction systems depends heavily on the quality of the features they use. While researchers have used various techniques for enhancing stock-specific features, less attention has been paid to extracting features that represent the general mechanisms of financial markets. In this paper, we investigate the importance of extracting such general features for stock market prediction and show how they can improve the performance of financial market prediction. We present a framework called U-CNNpred that uses a CNN-based structure. A base model is trained in a specially designed layer-wise training procedure over a pool of historical data from many financial markets, in order to extract the common patterns across markets. Our experiments, in which we have used hundreds of stocks in the S&P 500 as well as 14 well-known indices around the world, show that this model can outperform baseline algorithms when predicting the directional movement of the markets for which it has been trained. We also show that the base model can be fine-tuned for predicting new markets and achieve better performance than state-of-the-art baseline algorithms that construct market-specific models from scratch.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.12540&r=all
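    The transfer step, fine-tuning the pre-trained base model for a new market, follows the usual pattern of freezing early layers and retraining the rest. A hypothetical PyTorch sketch; the paper's exact CNN architecture and layer-wise training schedule are not reproduced here.

      import torch.nn as nn

      def fine_tune(base_cnn: nn.Module, n_frozen: int = 2) -> nn.Module:
          # Freeze the first n_frozen children, which carry the general
          # market patterns learned from the pooled historical data.
          for i, child in enumerate(base_cnn.children()):
              if i < n_frozen:
                  for p in child.parameters():
                      p.requires_grad = False
          # The remaining, market-specific layers are then trained further
          # on the new market's data.
          return base_cnn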

This nep-big issue is ©2019 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.