nep-big New Economics Papers
on Big Data
Issue of 2022‒12‒05
38 papers chosen by
Tom Coupé
University of Canterbury

  1. Does Data Disclosure Improve Local Government Performance? Evidence from Italian Municipalities By Lockwood, Ben; Porcelli, Francesco; Redoano, Michela; Schiavone, Antonio
  2. Smiles in Profiles: Improving Fairness and Efficiency Using Estimates of User Preferences in Online Marketplaces By Susan Athey; Dean Karlan; Emil Palikot; Yuan Yuan
  3. Preferences Single-Peaked on a Tree: Multiwinner Elections and Structural Results By Dominik Peters; Lan Yu; Hau Chan; Edith Elkind
  4. Generational Differences in Automobility: Comparing America's Millennials and Gen Xers Using Gradient Boosting Decision Trees By Wang, Kailai; Wang, Xize
  5. The effects of mandatory speed limits on crash frequency: A causal machine learning approach By Metz-Peeters, Maike
  6. A survey on machine learning methods for churn prediction By Louis Geiler; Séverine Affeldt; Mohamed Nadif
  7. Flexible machine learning estimation of conditional average treatment effects: a blessing and a curse By Richard Post; Isabel van den Heuvel; Marko Petkovic; Edwin van den Heuvel
  8. Uncertainty Aware Trader-Company Method: Interpretable Stock Price Prediction Capturing Uncertainty By Yugo Fujimoto; Kei Nakagawa; Kentaro Imajo; Kentaro Minami
  9. Spectral Representation Learning for Conditional Moment Models By Ziyu Wang; Yucen Luo; Yueru Li; Jun Zhu; Bernhard Schölkopf
  10. Stock Trading Volume Prediction with Dual-Process Meta-Learning By Ruibo Chen; Wei Li; Zhiyuan Zhang; Ruihan Bao; Keiko Harimoto; Xu Sun
  11. Predictive Crypto-Asset Automated Market Making Architecture for Decentralized Finance using Deep Reinforcement Learning By Tristan Lim
  12. The smart green nudge: Reducing product returns through enriched digital footprints & causal machine learning By von Zahn, Moritz; Bauer, Kevin; Mihale-Wilson, Cristina; Jagow, Johanna; Speicher, Max; Hinz, Oliver
  13. Nowcasting GDP using machine learning methods By Dennis Kant; Andreas Pick; Jasper de Winter
  14. The Information Bottleneck Principle in Corporate Hierarchies By Cameron Gordon
  15. The Who, What, When, and How of Industrial Policy: A Text-Based Approach By Juhász, Réka; Lane, Nathaniel; Oehlsen, Emily; Pérez, Verónica C.
  16. (Machine) Learning from the COVID-19 Lockdown about Electricity Market Performance with a Large Share of Renewables By Christoph Graf; Federico Quaglia; Frank A. Wolak
  17. A Data-driven Case-based Reasoning in Bankruptcy Prediction By Wei Li; Wolfgang Karl Härdle; Stefan Lessmann
  18. How Communication Makes the Difference between a Cartel and Tacit Collusion: A Machine Learning Approach By Maximilian Andres; Lisa Bruttel; Jana Friedrichsen
  19. Evaluating Impact of Social Media Posts by Executives on Stock Prices By Anubhav Sarkar; Swagata Chakraborty; Sohom Ghosh; Sudip Kumar Naskar
  20. EU Cohesion Policy on the Ground: Analyzing Small-Scale Effects Using Satellite Data By Julia Bachtrögler-Unger; Mathias Dolls; Carla Krolage; Paul Schüle; Hannes Taubenböck; Matthias Weigand
  21. A Neural Network-Based Distributional Constraint Learning Methodology for Mixed-Integer Stochastic Optimization By Alcántara Mata, Antonio; Ruiz Mora, Carlos
  22. A comparison of LSTM and GRU architectures with novel walk-forward approach to algorithmic investment strategy By Illia Baranochnikov; Robert Ślepaczuk
  23. Does women's political empowerment matter for income inequality? By Miriam Hortas-Rico; Vicente Rios
  24. A Multimodal Embedding-Based Approach to Industry Classification in Financial Markets By Rian Dolphin; Barry Smyth; Ruihai Dong
  25. APPLICATION OF ARTIFICIAL INTELLIGENCE IN BANKING: A STUDY BASED ON SBI-SIA VIRTUAL ASSISTANT By , SHANIMON S Dr; Mathew, Seena Mary
  26. Detección de Anomalías y Poder de Mercado en el Sector Eléctrico Colombiano By Alvaro J. Riascos Villegas; Julian Chitiva; Carlos Salazar
  27. Hybrid Convolutional Neural Network Components By John, Otumu
  28. Investment Portfolio Optimization Based on Modern Portfolio Theory and Deep Learning Models By Maciej Wysocki; Paweł Sakowski
  29. The global geography of digital platforms: towards platforms international locational determinants By Victo José da Silva Neto; Tulio Chiarini; Leonardo Costa Ribeiro; Igor Santos Tupy
  30. Incorporating High-Frequency Weather Data into Consumption Expenditure Predictions By Anders Christensen; Joel Ferguson; Simón Ramírez Amaya
  31. Examining the influence of user engagement on tourist virtual reality behavioral response from the human-computer interaction perspective: A PLSSEM-IMP-NN hybrid machine learning approach By Shang, Dawei
  32. State-dependent Asset Allocation Using Neural Networks By Reza Bradrania; Davood Pirayesh Neghab
  33. Immigrant Narratives By Kai Gehring; Joop Adema; Panu Poutvaara; Joop Age Harm Adema
  34. Assessing and Comparing Fixed-Target Forecasts of Arctic Sea Ice: Glide Charts for Feature-Engineered Linear Regression and Machine Learning Models By Francis X. Diebold; Maximilian Gobel; Philippe Goulet Coulombe
  35. A level-set approach to the control of state-constrained McKean-Vlasov equations: application to renewable energy storage and portfolio selection By Maximilien Germain; Huyên Pham; Xavier Warin
  36. Deep Learning for Inflexible Multi-Asset Hedging of incomplete market By Ruochen Xiao; Qiaochu Feng; Ruxin Deng
  37. Daily and intraday application of various architectures of the LSTM model in algorithmic investment strategies on Bitcoin and the S&P 500 Index By Katarzyna Kryńska; Robert Ślepaczuk
  38. Peru: Technical Assistance Report-Consumer Price Index Mission By International Monetary Fund

  1. By: Lockwood, Ben (University of Warwick); Porcelli, Francesco (University of Bari and CAGE); Redoano, Michela (University of Warwick); Schiavone, Antonio (University of Bologna)
    Abstract: We exploit the introduction of an open data online platform - part of a transparency program initiated by the Italian Government in late 2014 - as a natural experiment to analyse the effect of data disclosure on mayors’ expenditure and public good provision. First, we analyse the effect of the program by comparing municipalities on the border between ordinary and special regions, exploiting the fact that the latter regions did not participate in the program. We find that mayors in ordinary regions immediately change their behaviour after data disclosure by improving the disclosed indicators, and that the reaction depends also on their initial relative performance, a yardstick competition effect. Second, we investigate the effect of mayors’ attention to data disclosure within treated regions by tracking their daily accesses to the platform, which we instrument with the daily publication of newspaper articles mentioning the program. We find that mayors react to data disclosure by decreasing spending via a reduction of service provision, resulting in an aggregate decrease in efficiency. Overall, mayors seem to target variables that are disclosed on the website at the expense of variables that are less salient.
    Keywords: open data; local government; media coverage; OpenCivitas
    JEL: H72 H79
    URL: http://d.repec.org/n?u=RePEc:wrk:wqapec:17&r=big
  2. By: Susan Athey; Dean Karlan; Emil Palikot; Yuan Yuan
    Abstract: Online platforms often face challenges being both fair (i.e., non-discriminatory) and efficient (i.e., maximizing revenue). Using computer vision algorithms and observational data from a microlending marketplace, we find that choices made by borrowers creating online profiles impact both of these objectives. We further support this conclusion with a web-based randomized survey experiment. In the experiment, we create profile images using Generative Adversarial Networks that differ in a specific feature and estimate its impact on lender demand. We then counterfactually evaluate alternative platform policies and identify particular approaches to influencing the changeable profile photo features that can ameliorate the fairness-efficiency tension.
    JEL: D0 D40 J0 O1
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:30633&r=big
  3. By: Dominik Peters (LAMSADE - Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres - CNRS - Centre National de la Recherche Scientifique); Lan Yu; Hau Chan (University of Nebraska–Lincoln - University of Nebraska System); Edith Elkind (University of Oxford [Oxford])
    Abstract: A preference profile is single-peaked on a tree if the candidate set can be equipped with a tree structure so that the preferences of each voter are decreasing from their top candidate along all paths in the tree. This notion was introduced by Demange (1982), and subsequently Trick (1989b) described an efficient algorithm for deciding if a given profile is single-peaked on a tree. We study the complexity of multiwinner elections under several variants of the Chamberlin-Courant rule for preferences single-peaked on trees. We show that in this setting the egalitarian version of this rule admits a polynomial-time winner determination algorithm. For the utilitarian version, we prove that winner determination remains NP-hard for the Borda scoring function; indeed, this hardness result extends to a large family of scoring functions. However, a winning committee can be found in polynomial time if either the number of leaves or the number of internal vertices of the underlying tree is bounded by a constant. To benefit from these positive results, we need a procedure that can determine whether a given profile is single-peaked on a tree that has additional desirable properties (e.g., a small number of leaves). To address this challenge, we develop a structural approach that enables us to compactly represent all trees with respect to which a given profile is single-peaked. We show how to use this representation to efficiently find the best tree for a given profile for use with our winner determination algorithms: Given a profile, we can efficiently find a tree with the minimum number of leaves, or a tree with the minimum number of internal vertices among trees on which the profile is single-peaked. We then explore the power and limitations of this framework: we develop polynomial-time algorithms to find trees with the smallest maximum degree, diameter, or pathwidth, but show that it is NP-hard to check whether a given profile is single-peaked on a tree that is isomorphic to a given tree, or on a regular tree.
    Date: 2022–01–12
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03834509&r=big
  4. By: Wang, Kailai; Wang, Xize (National University of Singapore)
    Abstract: Whether the Millennials are less auto-centric than the previous generations has been widely discussed in the literature. Most existing studies use regression models and assume that all factors are linear-additive in contributing to the young adults' driving behaviors. This study relaxes this assumption by applying a non-parametric statistical learning method, namely the gradient boosting decision trees (GBDT). Using U.S. nationwide travel surveys for 2001 and 2017, this study examines the non-linear dose-response effects of lifecycle, socio-demographic and residential factors on daily driving distances of Millennial and Gen-X young adults. Holding all other factors constant, Millennial young adults had shorter predicted daily driving distances than their Gen-X counterparts. Besides, residential and economic factors explain around 50% of young adults' daily driving distances, while the collective contributions for life course events and demographics are about 33%. This study also identifies the density ranges for formulating effective land use policies aiming at reducing automobile travel demand.
    Date: 2021–05–11
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:n3a9e&r=big
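    A minimal sketch of the kind of gradient-boosting analysis described above: fit a GBDT to daily driving distances, inspect feature importances, and trace a non-linear dose-response curve via partial dependence. The file and column names (daily_vmt, pop_density, etc.) are hypothetical placeholders, not the authors' data or exact specification.

```python
# Hedged sketch: gradient-boosted regression with partial-dependence curves,
# the generic ingredients behind the dose-response analysis described above.
# Column names (daily_vmt, pop_density, income, age, has_child) are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

df = pd.read_csv("travel_survey.csv")          # hypothetical survey extract
X = df[["pop_density", "income", "age", "has_child"]]
y = df["daily_vmt"]                            # daily vehicle miles traveled

gbdt = GradientBoostingRegressor(n_estimators=500, max_depth=3,
                                 learning_rate=0.05).fit(X, y)

# Relative importance of each factor (illustrative only; the paper reports
# roughly 50% for residential/economic factors).
for name, imp in zip(X.columns, gbdt.feature_importances_):
    print(f"{name}: {imp:.2f}")

# Non-linear dose-response of driving distance to residential density.
pd_result = partial_dependence(gbdt, X, features=["pop_density"])
print(pd_result["average"][0][:10])
```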
  5. By: Metz-Peeters, Maike
    Abstract: This study analyzes the effects of binding speed limits on crash frequency on German motorways. Various geo-spatial data sources are merged into a new data set providing rich information on roadway characteristics for 500-meter segments of large parts of the German motorway network. The empirical analysis uses a causal forest, which makes it possible to estimate the effects of speed limits on crash frequency under fairly weak assumptions about the underlying data generating process and provides insights into treatment effect heterogeneity. The paper is the first to explicitly discuss possible pitfalls and potential solutions when applying causal forests to geo-spatial data. Substantial negative effects of three levels of speed limits on accident rates are found, being largest for severe, and especially fatal, crash rates, while effects on light crash rates are rather moderate. The heterogeneity analysis suggests that the effects are larger for less congested roads, as well as for roads with entrance and exit ramps, while heterogeneity regarding shares of heavy traffic is inconclusive.
    Keywords: Crash frequency, speed limits, German Autobahn, causal machine learning, causal forest, spatial machine learning
    JEL: R41 R42 R48
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:zbw:rwirep:982&r=big
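    A hedged sketch of a causal-forest analysis in the spirit of the paper, using econml's CausalForestDML as one possible implementation. The segment-level file, variable names, and the binary speed-limit treatment are assumptions for illustration; the author's estimator and geo-spatial adjustments are not reproduced.

```python
# Hedged sketch: heterogeneous effects of a (binary) speed limit on segment-level
# crash rates with a causal forest. Uses econml's CausalForestDML as one possible
# implementation; the paper's own estimator and variables may differ.
# Column names are hypothetical.
import pandas as pd
from econml.dml import CausalForestDML
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

seg = pd.read_csv("motorway_segments.csv")     # hypothetical 500m-segment data
Y = seg["crashes_per_mvkm"]                    # crash rate outcome
T = seg["has_speed_limit"]                     # binary treatment
X = seg[["aadt", "share_heavy", "n_lanes", "has_ramp", "curvature"]]

cf = CausalForestDML(model_y=RandomForestRegressor(),
                     model_t=RandomForestClassifier(),
                     discrete_treatment=True, n_estimators=2000,
                     random_state=0)
cf.fit(Y, T, X=X)

# Segment-level treatment effects and a simple heterogeneity cut by congestion.
te = cf.effect(X)
print("mean effect:", te.mean())
print("low-traffic segments:", te[seg["aadt"] < seg["aadt"].median()].mean())
```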
  6. By: Louis Geiler (CB - CB - Centre Borelli - UMR 9010 - Service de Santé des Armées - INSERM - Institut National de la Santé et de la Recherche Médicale - Université Paris-Saclay - CNRS - Centre National de la Recherche Scientifique - ENS Paris Saclay - Ecole Normale Supérieure Paris-Saclay - UPCité - Université Paris Cité); Séverine Affeldt (CB - CB - Centre Borelli - UMR 9010 - Service de Santé des Armées - INSERM - Institut National de la Santé et de la Recherche Médicale - Université Paris-Saclay - CNRS - Centre National de la Recherche Scientifique - ENS Paris Saclay - Ecole Normale Supérieure Paris-Saclay - UPCité - Université Paris Cité); Mohamed Nadif (CB - CB - Centre Borelli - UMR 9010 - Service de Santé des Armées - INSERM - Institut National de la Santé et de la Recherche Médicale - Université Paris-Saclay - CNRS - Centre National de la Recherche Scientifique - ENS Paris Saclay - Ecole Normale Supérieure Paris-Saclay - UPCité - Université Paris Cité)
    Abstract: The diversity and specificities of today's businesses have leveraged a wide range of prediction techniques. In particular, churn prediction is a major economic concern for many companies. The purpose of this study is to draw general guidelines from a benchmark of supervised machine learning techniques in association with widely used data sampling approaches on publicly available datasets in the context of churn prediction. Choosing a priori the most appropriate sampling method as well as the most suitable classification model is not trivial, as it strongly depends on the data intrinsic characteristics. In this paper we study the behavior of eleven supervised and semi-supervised learning methods and seven sampling approaches on sixteen diverse and publicly available churn-like datasets. Our evaluations, reported in terms of the Area Under the Curve (AUC) metric, explore the influence of sampling approaches and data characteristics on the performance of the studied learning methods. Besides, we propose the Nemenyi test and Correspondence Analysis as means of comparison and visualization of the association between classification algorithms, sampling methods and datasets. Most importantly, our experiments lead to a practical recommendation for a prediction pipeline based on an ensemble approach. Our proposal can be successfully applied to a wide range of churn-like datasets.
    Keywords: churn prediction, machine learning, ensemble technique
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03824873&r=big
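    A minimal sketch of the benchmark loop the survey describes: pair sampling strategies with supervised classifiers and score each combination by AUC. Only two samplers and two models are shown, and the churn dataset and its columns are hypothetical.

```python
# Hedged sketch: benchmarking sampling approaches x classifiers by AUC,
# the generic setup behind the survey described above. Dataset is hypothetical.
import pandas as pd
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("churn.csv")                  # hypothetical churn-like dataset
X, y = df.drop(columns="churn"), df["churn"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

samplers = {"smote": SMOTE(random_state=0),
            "undersample": RandomUnderSampler(random_state=0)}
models = {"logit": LogisticRegression(max_iter=1000),
          "rf": RandomForestClassifier(n_estimators=300, random_state=0)}

for s_name, sampler in samplers.items():
    X_bal, y_bal = sampler.fit_resample(X_tr, y_tr)   # rebalance training data only
    for m_name, model in models.items():
        proba = model.fit(X_bal, y_bal).predict_proba(X_te)[:, 1]
        print(f"{s_name} + {m_name}: AUC = {roc_auc_score(y_te, proba):.3f}")
```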
  7. By: Richard Post; Isabel van den Heuvel; Marko Petkovic; Edwin van den Heuvel
    Abstract: Causal inference from observational data requires untestable assumptions. If these assumptions apply, machine learning (ML) methods can be used to study complex forms of causal-effect heterogeneity. Several ML methods were developed recently to estimate the conditional average treatment effect (CATE). If the features at hand cannot explain all heterogeneity, the individual treatment effects (ITEs) can seriously deviate from the CATE. In this work, we demonstrate how the distributions of the ITE and the estimated CATE can differ when a causal random forest (CRF) is applied. We extend the CRF to estimate the difference in conditional variance between treated and controls. If the ITE distribution equals the CATE distribution, this difference in variance should be small. If they differ, an additional causal assumption is necessary to quantify the heterogeneity not captured by the CATE distribution. The conditional variance of the ITE can be identified when the individual effect is independent of the outcome under no treatment given the measured features. Then, in the cases where the ITE and CATE distributions differ, the extended CRF can appropriately estimate the characteristics of the ITE distribution while the CRF fails to do so.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2210.16547&r=big
  8. By: Yugo Fujimoto; Kei Nakagawa; Kentaro Imajo; Kentaro Minami
    Abstract: Machine learning is an increasingly popular tool with some success in predicting stock prices. One promising method is the Trader-Company (TC) method, which takes into account the dynamism of the stock market and has both high predictive power and interpretability. Machine learning-based stock prediction methods including the TC method have been concentrating on point prediction. However, point prediction in the absence of uncertainty estimates lacks credibility quantification and raises concerns about safety. The challenge in this paper is to make an investment strategy that combines high predictive power and the ability to quantify uncertainty. We propose a novel approach called the Uncertainty Aware Trader-Company (UTC) method. The core idea of this approach is to combine the strengths of both frameworks by merging the TC method with probabilistic modeling, which provides probabilistic predictions and uncertainty estimations. We expect this to retain the predictive power and interpretability of the TC method while capturing the uncertainty. We theoretically prove that the proposed method estimates the posterior variance and does not introduce additional biases from the original TC method. We conduct a comprehensive evaluation of our approach based on synthetic and real market datasets. We confirm with synthetic data that the UTC method can detect situations where the uncertainty increases and the prediction is difficult. We also confirm that the UTC method can detect abrupt changes in data generating distributions. We demonstrate with real market data that the UTC method can achieve higher returns and lower risks than baselines.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2210.17030&r=big
  9. By: Ziyu Wang; Yucen Luo; Yueru Li; Jun Zhu; Bernhard Schölkopf
    Abstract: Many problems in causal inference and economics can be formulated in the framework of conditional moment models, which characterize the target function through a collection of conditional moment restrictions. For nonparametric conditional moment models, efficient estimation has always relied on preimposed conditions on various measures of ill-posedness of the hypothesis space, which are hard to validate when flexible models are used. In this work, we address this issue by proposing a procedure that automatically learns representations with controlled measures of ill-posedness. Our method approximates a linear representation defined by the spectral decomposition of a conditional expectation operator, which can be used for kernelized estimators and is known to facilitate minimax optimal estimation in certain settings. We show this representation can be efficiently estimated from data, and establish L2 consistency for the resulting estimator. We evaluate the proposed method on proximal causal inference tasks, exhibiting promising performance on high-dimensional, semi-synthetic data.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2210.16525&r=big
  10. By: Ruibo Chen; Wei Li; Zhiyuan Zhang; Ruihan Bao; Keiko Harimoto; Xu Sun
    Abstract: Volume prediction is one of the fundamental objectives in the Fintech area, which is helpful for many downstream tasks, e.g., algorithmic trading. Previous methods mostly learn a universal model for different stocks. However, this kind of practice omits the specific characteristics of individual stocks by applying the same set of parameters for different stocks. On the other hand, learning different models for each stock would face data sparsity or cold start problems for many stocks with small capitalization. To take advantage of the data scale and the various characteristics of individual stocks, we propose a dual-process meta-learning method that treats the prediction of each stock as one task under the meta-learning framework. Our method can model the common pattern behind different stocks with a meta-learner, while modeling the specific pattern for each stock across time spans with stock-dependent parameters. Furthermore, we propose to mine the pattern of each stock in the form of a latent variable which is then used for learning the parameters for the prediction module. This makes the prediction procedure aware of the data pattern. Extensive experiments on volume predictions show that our method can improve the performance of various baseline models. Further analyses testify to the effectiveness of our proposed meta-learning framework.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.01762&r=big
  11. By: Tristan Lim
    Abstract: The study proposes a quote-driven predictive automated market maker (AMM) platform with on-chain custody and settlement functions, alongside off-chain predictive reinforcement learning capabilities to improve liquidity provision of real-world AMMs. The proposed AMM architecture is an augmentation of Uniswap V3, a cryptocurrency AMM protocol, utilizing a novel market equilibrium pricing for reduced divergence and slippage loss. Further, the proposed architecture involves a predictive AMM capability, utilizing a deep hybrid Long Short-Term Memory (LSTM) and Q-learning reinforcement learning framework that looks to improve market efficiency through better forecasts of liquidity concentration ranges, so that liquidity starts moving to expected concentration ranges before asset prices move, improving liquidity utilization. The augmented protocol framework is expected to have practical real-world implications, by (i) reducing divergence loss for liquidity providers, (ii) reducing slippage for crypto-asset traders, while (iii) improving capital efficiency for liquidity provision for the AMM protocol. To the best of our knowledge, no existing protocol or literature proposes a similar deep learning-augmented AMM that achieves comparable capital efficiency and loss minimization objectives for practical real-world applications.
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.01346&r=big
  12. By: von Zahn, Moritz; Bauer, Kevin; Mihale-Wilson, Cristina; Jagow, Johanna; Speicher, Max; Hinz, Oliver
    Abstract: With free delivery of products virtually being a standard in E-commerce, product returns pose a major challenge for online retailers and society. For retailers, product returns involve significant transportation, labor, disposal, and administrative costs. From a societal perspective, product returns contribute to greenhouse gas emissions and packaging disposal and are often a waste of natural resources. Therefore, reducing product returns has become a key challenge. This paper develops and validates a novel smart green nudging approach to tackle the problem of product returns during customers' online shopping processes. We combine a green nudge with a novel data enrichment strategy and a modern causal machine learning method. We first run a large-scale randomized field experiment in the online shop of a German fashion retailer to test the efficacy of a novel green nudge. Subsequently, we fuse the data from about 50,000 customers with publicly-available aggregate data to create what we call enriched digital footprints and train a causal machine learning system capable of optimizing the administration of the green nudge. We report two main findings: First, our field study shows that the large-scale deployment of a simple, low-cost green nudge can significantly reduce product returns while increasing retailer profits. Second, we show how a causal machine learning system trained on the enriched digital footprint can amplify the effectiveness of the green nudge by "smartly" administering it only to certain types of customers. Overall, this paper demonstrates how combining a low-cost marketing instrument, a privacy-preserving data enrichment strategy, and a causal machine learning method can create a win-win situation from both an environmental and economic perspective by simultaneously reducing product returns and increasing retailers' profits.
    Keywords: Product returns, Green Nudging, Causal Machine Learning, Enriched Digital Footprint
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:zbw:safewp:363&r=big
  13. By: Dennis Kant; Andreas Pick; Jasper de Winter
    Abstract: This paper compares the ability of several econometric and machine learning methods to nowcast GDP in (pseudo) real-time. The analysis takes the example of Dutch GDP over the years 1992-2018 using a broad data set of monthly indicators. It discusses the forecast accuracy but also analyzes the use of information from the large data set of regressors. We find that the random forest forecast provides the most accurate nowcasts while using the different variables in a relatively stable and equal manner.
    Keywords: factor models; forecasting competition; machine learning methods; nowcasting.
    JEL: C32 C53 E37
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:dnb:dnbwpp:754&r=big
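    A hedged sketch of a pseudo real-time random-forest nowcast with an expanding window, the kind of exercise described above. The indicator file, variable names, and window length are placeholders; the paper's vintage handling and competing models are not reproduced.

```python
# Hedged sketch: expanding-window random-forest nowcast of quarterly GDP growth
# from (quarter-aggregated) monthly indicators. Data and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

data = pd.read_csv("monthly_indicators.csv", index_col="quarter")
y = data["gdp_growth"]                         # target: quarterly GDP growth
X = data.drop(columns="gdp_growth")            # indicators aggregated to quarters

preds = {}
for t in range(40, len(data)):                 # pseudo real-time loop
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(X.iloc[:t], y.iloc[:t])             # train only on data available at t
    preds[data.index[t]] = rf.predict(X.iloc[[t]])[0]

errors = pd.Series(preds) - y.iloc[40:].values
print("RMSE:", (errors ** 2).mean() ** 0.5)
```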
  14. By: Cameron Gordon
    Abstract: The hierarchical nature of corporate information processing is a topic of great interest in economic and management literature. Firms are characterised by a need to make complex decisions, often aggregating partial and uncertain information, which greatly exceeds the attention capacity of constituent individuals. However, the efficient transmission of these signals is still not fully understood. Recently, the information bottleneck principle has emerged as a powerful tool for understanding the transmission of relevant information through intermediate levels in a hierarchical structure. In this paper we note that the information bottleneck principle may similarly be applied directly to corporate hierarchies. In doing so we provide a bridge between organisation theory and that of rapidly expanding work in deep neural networks (DNNs), including the use of skip connections as a means of more efficient transmission of information in hierarchical organisations.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2210.14861&r=big
  15. By: Juhász, Réka; Lane, Nathaniel (University of Oxford); Oehlsen, Emily; Pérez, Verónica C.
    Abstract: Although questions surrounding industrial policy are fundamental, we lack both measures and comprehensive data on industrial policy. Consequently, scholars and practitioners lack a systematic picture of industrial policy practice. This paper provides a new, text-based approach to measuring industrial policy. We take the tools of supervised machine learning to a comprehensive, English-language database of economic policy to construct measures of industrial policy at the country, industry, and year level. We use this data to establish four fundamental facts about global industrial policy from 2009 to 2020. First, IP is common (25 percent of policies in our database) and has been trending upward since 2010. Second, industrial policy is technocratic and granular, taking the form of subsidies and export promotion measures targeted at individual firms, instead of tariffs. Third, the countries engaged most in IP tend to be wealthier (top income quintile) liberal democracies, and IP is very rare among the poorest nations (bottom quintile). Fourth, IP tends to be targeted toward a small share of industries, and targeting is highly correlated with an industry’s revealed comparative advantage. Thus, we find contemporary practice is a far cry from industrial policy’s past and tends toward selective, export-oriented policies used by the world’s most developed economies.
    Date: 2022–08–14
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:uyxh9&r=big
  16. By: Christoph Graf; Federico Quaglia; Frank A. Wolak
    Abstract: The negative demand shock due to the COVID-19 lockdown has reduced net demand for electricity -- system demand less amount of energy produced by intermittent renewables, hydroelectric units, and net imports -- that must be served by controllable generation units. Under normal demand conditions, introducing additional renewable generation capacity reduces net demand. Consequently, the lockdown can provide insights about electricity market performance with a large share of renewables. We find that although the lockdown reduced average day-ahead prices in Italy by 45%, re-dispatch costs increased by 73%, both relative to the averages for the same period in previous years. We estimate a deep-learning model using data from 2017--2019 and find that predicted re-dispatch costs during the lockdown period are only 26% higher than the same period in previous years. We argue that the difference between actual and predicted lockdown period re-dispatch costs is the result of increased opportunities for suppliers with controllable units to exercise market power in the re-dispatch market in these persistently low net demand conditions. Our results imply that without grid investments and other technologies to manage low net demand conditions, an increased share of intermittent renewables is likely to increase costs of maintaining a reliable grid.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.02196&r=big
  17. By: Wei Li; Wolfgang Karl Härdle; Stefan Lessmann
    Abstract: There has been intensive research regarding machine learning models for predicting bankruptcy in recent years. However, the lack of interpretability limits their growth and practical implementation. This study proposes a data-driven explainable case-based reasoning (CBR) system for bankruptcy prediction. Empirical results from a comparative study show that the proposed approach performs superior to existing, alternative CBR systems and is competitive with state-of-the-art machine learning models. We also demonstrate that the asymmetrical feature similarity comparison mechanism in the proposed CBR system can effectively capture the asymmetrically distributed nature of financial attributes, such as a few companies controlling more cash than the majority, hence improving both the accuracy and explainability of predictions. In addition, we delicately examine the explainability of the CBR system in the decision-making process of bankruptcy prediction. While much research suggests a trade-off between improving prediction accuracy and explainability, our findings show a prospective research avenue in which an explainable model that thoroughly incorporates data attributes by design can reconcile the dilemma.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.00921&r=big
  18. By: Maximilian Andres; Lisa Bruttel; Jana Friedrichsen
    Abstract: This paper sheds new light on the role of communication for cartel formation. Using machine learning to evaluate free-form chat communication among firms in a laboratory experiment, we identify typical communication patterns for both explicit cartel formation and indirect attempts to collude tacitly. We document that firms are less likely to communicate explicitly about price fixing and more likely to use indirect messages when sanctioning institutions are present. This effect of sanctions on communication reinforces the direct cartel-deterring effect of sanctions as collusion is more difficult to reach and sustain without an explicit agreement. Indirect messages have no, or even a negative, effect on prices.
    Keywords: cartel, collusion, communication, machine learning, experiment
    JEL: C92 D43 L41
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10024&r=big
  19. By: Anubhav Sarkar; Swagata Chakraborty; Sohom Ghosh; Sudip Kumar Naskar
    Abstract: Predicting stock market movements has always been of great interest to investors and an active area of research. Research has proven that popularity of products is highly influenced by what people talk about. Social media like Twitter, Reddit have become hotspots of such influences. This paper investigates the impact of social media posts on close price prediction of stocks using Twitter and Reddit posts. Our objective is to integrate sentiment of social media data with historical stock data and study its effect on closing prices using time series models. We carried out rigorous experiments and deep analysis using multiple deep learning based models on different datasets to study the influence of posts by executives and general people on the close price. Experimental results on multiple stocks (Apple and Tesla) and decentralised currencies (Bitcoin and Ethereum) consistently show improvements in prediction on including social media data and greater improvements on including executive posts.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.01287&r=big
  20. By: Julia Bachtrögler-Unger; Mathias Dolls; Carla Krolage; Paul Schüle; Hannes Taubenböck; Matthias Weigand
    Abstract: We present a novel approach for analyzing the effects of EU cohesion policy on local economic activity. For all municipalities in the border area of the Czech Republic, Germany and Poland, we collect project-level data on EU funding in the period between 2007 and 2013. Using night light emission data as a proxy for economic development, we show that the receipt of a higher amount of EU funding is associated with increased economic activity at the municipal level. Our paper demonstrates that remote sensing data can provide an effective way to model local economic development also in Europe, where no comprehensive cross-border data is available at such a spatially granular level.
    Keywords: regional development, EU cohesion policy, remote sensing
    JEL: R11 O18 H54
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10056&r=big
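    A minimal sketch of the core regression idea: relate municipal night-light emissions to EU funding with year and municipality effects. The panel file, funding measure, and specification are hypothetical simplifications of the paper's design.

```python
# Hedged sketch: association between night-light emissions and EU funding at the
# municipality level. Variable names and fixed-effects structure are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

muni = pd.read_csv("border_municipalities.csv")   # hypothetical municipality-year panel
muni["log_lights"] = np.log1p(muni["nightlight_sum"])
muni["log_funding"] = np.log1p(muni["eu_funding_per_capita"])

model = smf.ols("log_lights ~ log_funding + C(year) + C(municipality)", data=muni).fit(
    cov_type="cluster", cov_kwds={"groups": muni["municipality"]})
print(model.params["log_funding"])                # elasticity-style association
```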
  21. By: Alcántara Mata, Antonio; Ruiz Mora, Carlos
    Abstract: The use of machine learning methods helps to improve decision making in different fields. In particular, the idea of bridging predictions (machine learning models) and prescriptions (optimization problems) is gaining attention within the scientific community. One of the main ideas to address this trade-off is the so-called Constraint Learning (CL) methodology, where the structures of the machine learning model can be treated as a set of constraints to be embedded within the optimization problem, establishing the relationship between a direct decision variable x and a response variable y. However, most CL approaches have focused on making point predictions for a certain variable, not taking into account the statistical and external uncertainty faced in the modeling process. In this paper, we extend the CL methodology to deal with uncertainty in the response variable y. The novel Distributional Constraint Learning (DCL) methodology makes use of a piece-wise linearizable neural network-based model to estimate the parameters of the conditional distribution of y (dependent on decisions x and contextual information), which can be embedded within mixed-integer optimization problems. In particular, we formulate a stochastic optimization problem by sampling random values from the estimated distribution by using a linear set of constraints. In this sense, DCL combines both the high predictive performance of the neural network method and the possibility of generating scenarios to account for uncertainty within a tractable optimization model. The behavior of the proposed methodology is tested in a real-world problem in the context of electricity systems, where a Virtual Power Plant seeks to optimize its operation, subject to different forms of uncertainty, and with price-responsive consumers.
    Keywords: Stochastic Optimization; Constraint Learning; Distribution Estimation; Neural Networks; Mixed-Integer Optimization
    Date: 2022–11–21
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:36072&r=big
  22. By: Illia Baranochnikov (University of Warsaw, Faculty of Economic Sciences; Quantitative Finance Research Group); Robert Ślepaczuk (University of Warsaw, Faculty of Economic Sciences, Department of Quantitative Finance; Quantitative Finance Research Group)
    Abstract: The aim of this work is to build a profitable algorithmic investment strategy on various types of assets. The algorithm is built using recurrent neural networks (LSTM and GRU) as the primary source of signals to buy/sell financial instruments. LSTM and GRU architectures are compared in terms of obtaining the best results and beating the market. The algorithm is tested for four financial instruments (Bitcoin, Tesla, Brent Oil and Gold) on daily and hourly data frequencies. The out-of-sample period is from 1 January 2021 to 1 April 2022. A walk-forward process is responsible for training models and selecting the best model to forecast asset prices in the future. Ten model architectures with various hyperparameters are trained during each step of the walk-forward process. The model architecture with the highest Information Ratio (IR*) in the validation period is used for forecasting in the out-of-sample period. For each strategy, the performance metrics are calculated, based on which the profitability of the algorithm is evaluated. At the end, a detailed sensitivity analysis with regard to the main hyperparameters is conducted. The results reveal that LSTM outperforms GRU in most of the cases and that an investment strategy built on the LSTM/GRU architecture is able to beat the market in only 50% of the tested cases.
    Keywords: deep learning, recurrent neural networks, algorithm, trading strategy, LSTM, GRU, walk-forward process
    JEL: C4 C14 C45 C53 C58 G13
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2022-21&r=big
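    A hedged sketch of the LSTM-versus-GRU comparison inside a simplified walk-forward loop, using Keras. The synthetic price series, window lengths, architectures, and sign-based signal rule are placeholders; the authors' ten candidate architectures and IR*-based model selection are not reproduced.

```python
# Hedged sketch: walk-forward training of LSTM vs. GRU signal generators on a
# synthetic return series, in the spirit of the procedure described above.
import numpy as np
import tensorflow as tf

def make_model(cell, lookback):
    model = tf.keras.Sequential([
        cell(32, input_shape=(lookback, 1)),    # LSTM or GRU layer
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def windows(series, lookback):
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    return X[..., None], series[lookback:]

prices = np.cumsum(np.random.randn(2000)) + 100   # placeholder price series
rets = np.diff(np.log(prices))
lookback, train_len, step = 20, 1000, 250

for cell in (tf.keras.layers.LSTM, tf.keras.layers.GRU):
    pnl = []
    for start in range(0, len(rets) - train_len - step, step):   # walk-forward
        X_tr, y_tr = windows(rets[start:start + train_len], lookback)
        X_te, y_te = windows(rets[start + train_len:start + train_len + step], lookback)
        model = make_model(cell, lookback)
        model.fit(X_tr, y_tr, epochs=5, verbose=0)
        signal = np.sign(model.predict(X_te, verbose=0).ravel())  # buy/sell signal
        pnl.append((signal * y_te).sum())
    print(cell.__name__, "cumulative log-return:", sum(pnl))
```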
  23. By: Miriam Hortas-Rico; Vicente Rios
    Abstract: This paper analyzes the endogenous relationship between women's political empowerment and income inequality in a sample of 142 countries between 1990 and 2019. To identify causal effects, we rely on the use of Random Forests techniques and a set of exogenous variables on ancestral and traditional cultural norms of gender roles. These tree-based machine learning statistical techniques help us to predict current women's political empowerment with high accuracy solely using ancestral societal traits. This predicted variable is then used in the second stage of the IV estimation of a panel data specification of income inequality. Our panel-IV regressions show that women's political empowerment reduces income inequality, measured as the Gini index of disposable income. This finding is robust to the presence of spatial interdependence and time persistence in inequality outcomes, as well as to the potential bias due to the omission of unobservable variables, the presence of outliers and influential observations, and an alternative definition of income inequality.
    Keywords: women's political empowerment, income inequality, machine learning, instrumental variables.
    JEL: C23 C26 C53 D31 D63 I31 J16
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:gov:wpaper:2206&r=big
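    A minimal sketch of the two-step idea described above: a random forest predicts women's political empowerment from ancestral traits, and the fitted values then enter a second-stage inequality regression. Variable names are hypothetical, and the panel fixed effects and spatial terms used in the paper are omitted.

```python
# Hedged sketch: random-forest first step plus a second-stage inequality regression.
# The country panel, ancestral traits, and controls are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

panel = pd.read_csv("country_panel.csv")       # hypothetical country-year panel
ancestral = ["plough_use", "ancestral_kinship", "pre_industrial_gender_norms"]

# First step: predict empowerment from exogenous ancestral/cultural traits.
rf = RandomForestRegressor(n_estimators=1000, random_state=0)
rf.fit(panel[ancestral], panel["wpe"])
panel["wpe_hat"] = rf.predict(panel[ancestral])

# Second step: regress the Gini index on predicted empowerment plus controls.
X2 = sm.add_constant(panel[["wpe_hat", "gdp_pc", "trade_openness"]])
second_stage = sm.OLS(panel["gini_disposable"], X2).fit(
    cov_type="cluster", cov_kwds={"groups": panel["country"]})
print(second_stage.summary())
```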
  24. By: Rian Dolphin; Barry Smyth; Ruihai Dong
    Abstract: Industry classification schemes provide a taxonomy for segmenting companies based on their business activities. They are relied upon in industry and academia as an integral component of many types of financial and economic analysis. However, even modern classification schemes have failed to embrace the era of big data and remain a largely subjective undertaking prone to inconsistency and misclassification. To address this, we propose a multimodal neural model for training company embeddings, which harnesses the dynamics of both historical pricing data and financial news to learn objective company representations that capture nuanced relationships. We explain our approach in detail and highlight the utility of the embeddings through several case studies and application to the downstream task of industry classification.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.06378&r=big
  25. By: , SHANIMON S Dr; Mathew, Seena Mary
    Abstract: Artificial intelligence (AI) is now widely acknowledged as one of the most important digital transformation enablers across a significant number of industries. AI has the potential to help enterprises become more imaginative, versatile, and adaptable than they have ever been. AI is already being applied to enhance productivity and competitiveness while also driving digital transformation in a range of organizations. AI is supporting Indian banks in upgrading their operations across the board, from accounting to sales to contracts and cybersecurity. This is a case study based on the SBI-SIA virtual assistant. Recent developments, the emergence of virtual banking, and trends in modern banking systems are explained in this study.
    Date: 2022–07–28
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:gkxh8&r=big
  26. By: Alvaro J. Riascos Villegas; Julian Chitiva; Carlos Salazar
    Abstract: We introduce a methodology for generating alerts of potential anti-competitive practices in the Colombian wholesale electricity market. The methodology consists of two parts: (1) based on the declared availability of the agents, we identify those that can potentially have a high impact on the spot price (i.e., pivotal agents in the sense of the residual supply index, IOR), and (2) using machine learning methods, we identify the energy offers (i.e., prices) of those pivotal agents that, given the state of the market and its history (i.e., past offers, water resources, generation technology, etc.), could be considered atypical or anomalous. Alerts of potential anti-competitive practices are generated from these two indicators. We report the results of applying this methodology to the Colombian wholesale market in the period August 16, 2018 - July 30, 2019. An important characteristic of this methodology is that it can be applied with the information available from the system operator 24 hours before market outcomes are observed, generating alerts ex ante to the realization of the events. This possibility of generating alerts almost in real time is even more important in view of the new intra-day market that will soon come into force in the Colombian electricity system.
    Keywords: Pool Electricity Markets, Anomaly Detection, Market Power, Machine Learning
    JEL: H62 H63 J23 J31
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:bdr:borrec:1217&r=big
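    A hedged sketch of the two screening steps described in the abstract: a residual-supply pivotality flag and an anomaly score for each supplier's offer, here using an isolation forest as one possible anomaly detector. The offer file, columns, and thresholds are hypothetical.

```python
# Hedged sketch: pivotality screening (residual supply index) plus anomaly
# detection on offers. Data, columns, and thresholds are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

offers = pd.read_csv("hourly_offers.csv")      # hypothetical: hour, agent, offer_price,
                                               # declared_capacity, demand, hydro_level
# Step 1: an agent is pivotal if the rest of the market cannot cover demand
# without it (residual supply index below 1).
total_cap = offers.groupby("hour")["declared_capacity"].transform("sum")
offers["rsi"] = (total_cap - offers["declared_capacity"]) / offers["demand"]
offers["pivotal"] = offers["rsi"] < 1.0

# Step 2: flag offers that look anomalous given market state and history.
features = offers[["offer_price", "demand", "hydro_level", "declared_capacity"]]
offers["anomaly"] = IsolationForest(random_state=0).fit_predict(features) == -1

alerts = offers[offers["pivotal"] & offers["anomaly"]]
print(alerts[["hour", "agent", "offer_price", "rsi"]].head())
```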
  27. By: John, Otumu
    Abstract: Hybrid Convolutional Neural Network Components
    Date: 2022–10–29
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:9xntm&r=big
  28. By: Maciej Wysocki (University of Warsaw, Faculty of Economic Sciences; Quantitative Finance Research Group); Paweł Sakowski (University of Warsaw, Faculty of Economic Sciences; Quantitative Finance Research Group)
    Abstract: This paper investigates the important problem of appropriate variance-covariance matrix estimation in Modern Portfolio Theory. In this study we propose a novel framework for variance-covariance matrix estimation for purposes of portfolio optimization, which is based on deep learning models. We apply long short-term memory (LSTM) recurrent neural networks (RNNs), along with two probabilistic deep learning models, DeepVAR and GPVAR, to the task of one-day-ahead multivariate forecasting. We then use these forecasts to optimize portfolios consisting of stocks and cryptocurrencies. Our analysis presents results across different combinations of observation windows and rebalancing periods to compare performances of classical and deep learning variance-covariance estimation methods. The conclusions of the study are that although the performance of the strategies (portfolios) differed significantly between different combinations of parameters, generally the best results in terms of the information ratio and annualized returns are obtained using the LSTM-RNN models. Moreover, longer observation windows translate into better performance of the deep learning models, indicating that these methods require longer windows to be able to efficiently capture the long-term dependencies of the variance-covariance matrix structure. Strategies with less frequent rebalancing typically perform better than those with the shortest rebalancing windows across all considered methods.
    Keywords: Portfolio Optimization, Deep Learning, Variance-Covariance Matrix Forecasting, Investment Strategies, Recurrent Neural Networks, Long Short-Term Memory Neural Networks
    JEL: C4 C14 C45 C53 C58 G11
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2022-12&r=big
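    A minimal sketch of the optimization step that sits downstream of the forecasts described above: turning a one-day-ahead variance-covariance forecast into minimum-variance weights. The covariance matrix is a placeholder; the paper's LSTM, DeepVAR, and GPVAR estimators are not reproduced.

```python
# Hedged sketch: minimum-variance weights from a forecast covariance matrix,
# w = Sigma^{-1} 1 / (1' Sigma^{-1} 1). The forecast itself is a placeholder.
import numpy as np

def min_variance_weights(cov):
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)             # Sigma^{-1} 1
    return w / w.sum()                         # normalize to sum to one

# Placeholder one-day-ahead covariance for 4 assets (2 stocks, 2 cryptocurrencies).
cov_forecast = np.array([[0.04, 0.01, 0.00, 0.00],
                         [0.01, 0.05, 0.00, 0.00],
                         [0.00, 0.00, 0.20, 0.08],
                         [0.00, 0.00, 0.08, 0.25]])
weights = min_variance_weights(cov_forecast)
print("weights:", np.round(weights, 3), "sum:", weights.sum())
```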
  29. By: Victo José da Silva Neto (Radboud University); Tulio Chiarini (IPEA); Leonardo Costa Ribeiro (Cedeplar/UFMG); Igor Santos Tupy (UFV)
    Abstract: Digital platforms have positioned themselves at the center of global flows of capital, knowledge, and work. Their ability to influence and organize these flows makes it imperative to understand the locational decisions of platform companies. This paper explores new evidence on the geography of the digital platform economy. Our objective is threefold. First, we propose a novel methodology using data science and artificial intelligence tools to identify platform companies. Second, with a set of over three thousand companies, we introduce worldwide maps where it is possible to see the countries and cities that host platform companies. Third, we present platform companies’ locational choices using econometric models. While we observe a geographic concentration of platform companies in the U.S. and China, we also see that digital platform companies are spreading in all geographical directions, including tax havens, reinforcing the hypothesis that "platforming" is a worldwide phenomenon.
    Keywords: Platformization; Platform capitalism; Natural language processing; Zero-Inflated Negative Binomial regression model; Orbis
    JEL: F01 L86 O33
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:cdp:texdis:td650&r=big
  30. By: Anders Christensen; Joel Ferguson; Simón Ramírez Amaya
    Abstract: Recent efforts have been very successful in accurately mapping welfare in data-sparse regions of the world using satellite imagery and other non-traditional data sources. However, the literature to date has focused on predicting a particular class of welfare measures, asset indices, which are relatively insensitive to short term fluctuations in well-being. We suggest that predicting more volatile welfare measures, such as consumption expenditure, substantially benefits from the incorporation of data sources with high temporal resolution. By incorporating daily weather data into training and prediction, we improve consumption prediction accuracy significantly compared to models that only utilize satellite imagery.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.01406&r=big
  31. By: Shang, Dawei
    Abstract: Due to the impact of the COVID-19 pandemic, attraction sites have tended to adopt new ways of providing tour products and services to visitors, such as virtual reality (VR). Based on human-computer interaction (HCI) user engagement and domain segmentation innovativeness theory, we develop and test a theoretical framework using a hybrid partial least squares structural equation model (PLSSEM) with an Importance Performance Matrix (IMP) and a neural network machine learning approach (PLSSEM-IMP-NN) that examines key user engagement drivers of visitors’ attitude toward VR (ATT) and in-person tour intentions (ITI) during COVID-19. Based on a sample of visitor responses, the results demonstrate that a) user engagement, including aesthetic appeal, focused attention, perceived usability, and reward experience, raises attitude toward VR; b) product-possessing innovativeness positively moderates the relationship between ATT and ITI; c) information-possessing innovativeness negatively moderates the relationship between ATT and ITI; d) ATT exerts a mediating effect between user engagement and ITI. The proposed PLSSEM-IMP-NN approach has been examined and shown to be efficient and effective in HCI and behavioral response assessment. Other contributions to theory and practical implications are discussed accordingly.
    Date: 2022–10–15
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:hp259&r=big
  32. By: Reza Bradrania; Davood Pirayesh Neghab
    Abstract: Changes in market conditions present challenges for investors as they cause performance to deviate from the ranges predicted by long-term averages of means and covariances. The aim of conditional asset allocation strategies is to overcome this issue by adjusting portfolio allocations to hedge changes in the investment opportunity set. This paper proposes a new approach to conditional asset allocation that is based on machine learning; it analyzes historical market states and asset returns and identifies the optimal portfolio choice in a new period when new observations become available. In this approach, we directly relate state variables to portfolio weights, rather than firstly modeling the return distribution and subsequently estimating the portfolio choice. The method captures nonlinearity among the state (predicting) variables and portfolio weights without assuming any particular distribution of returns and other data, without fitting a model with a fixed number of predicting variables to data and without estimating any parameters. The empirical results for a portfolio of stock and bond indices show the proposed approach generates a more efficient outcome compared to traditional methods and is robust in using different objective functions across different sample periods.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.00871&r=big
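    A hedged sketch of the direct mapping the abstract describes: a small network takes state variables and outputs portfolio weights, trained on realized portfolio returns rather than on a fitted return distribution. The two-asset setting, synthetic data, and mean-return objective are placeholders for the paper's setup.

```python
# Hedged sketch: map state variables directly to portfolio weights with a neural
# network and train on realized portfolio returns. Data and objective are toy.
import numpy as np
import tensorflow as tf

n, k = 1000, 3
states = np.random.randn(n, k).astype("float32")                     # state variables
asset_rets = 0.001 + 0.02 * np.random.randn(n, 2).astype("float32")  # stock, bond returns

inputs = tf.keras.Input(shape=(k,))
hidden = tf.keras.layers.Dense(16, activation="relu")(inputs)
weights = tf.keras.layers.Dense(2, activation="softmax")(hidden)     # long-only weights
model = tf.keras.Model(inputs, weights)

def neg_portfolio_return(y_true, y_pred):
    # y_true holds next-period asset returns; maximize the mean portfolio return.
    return -tf.reduce_mean(tf.reduce_sum(y_pred * y_true, axis=1))

model.compile(optimizer="adam", loss=neg_portfolio_return)
model.fit(states, asset_rets, epochs=10, verbose=0)
print(model.predict(states[:3], verbose=0))                          # weights for 3 states
```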
  33. By: Kai Gehring; Joop Adema; Panu Poutvaara; Joop Age Harm Adema
    Abstract: Immigration is one of the most divisive political issues in many countries today. Competing narratives, circulated via the media, are crucial in shaping how immigrants’ role in society is perceived. We propose a new method combining advanced natural language processing tools with dictionaries to identify sentences containing one or more of seven immigrant narrative themes and assign a sentiment to each of these. Our narrative dataset covers 107,428 newspaper articles from 70 German newspapers over the 2000 to 2019 period. Using 16 human coders to evaluate our method, we find that it clearly outperforms simple word-matching methods and sentiment dictionaries. Empirically, culture narratives are more common than economy-related narratives. Narratives related to work and entrepreneurship are particularly positive, while foreign religion and welfare narratives tend to be negative. We use three distinct events to show how different types of shocks influence narratives, decomposing sentiment shifts into theme-composition and within-theme changes.
    Keywords: narrative economics, immigration, media, newspapers, voting
    JEL: F22 J15 C81 Z13 D72
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10026&r=big
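    A minimal sketch of the dictionary step of such a pipeline: tag sentences with narrative themes and attach a sentiment score. The tiny theme dictionaries, lexicon sentiment, and example sentences are placeholders; the paper combines dictionaries with modern NLP tools and human validation.

```python
# Hedged sketch: dictionary-based theme tagging with a naive lexicon sentiment.
# Dictionaries and sentences are toy placeholders, not the paper's resources.
theme_dict = {
    "work":    {"job", "work", "entrepreneur", "business"},
    "welfare": {"benefits", "welfare", "unemployment"},
    "culture": {"tradition", "language", "religion", "culture"},
}
pos_words = {"success", "contribute", "founded"}
neg_words = {"burden", "crime", "abuse"}

def tag_sentence(sentence):
    tokens = set(sentence.lower().replace(".", "").split())
    themes = [t for t, words in theme_dict.items() if tokens & words]
    sentiment = len(tokens & pos_words) - len(tokens & neg_words)
    return themes, sentiment

sentences = [
    "Many immigrants founded a business and contribute to the local economy.",
    "Critics claim newcomers are a burden on the welfare system.",
]
for s in sentences:
    print(tag_sentence(s), "<-", s)
```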
  34. By: Francis X. Diebold (University of Pennsylvania); Maximilian Gobel (University of Lisbon); Philippe Goulet Coulombe (University of Quebec)
    Abstract: We use "glide charts" (plots of sequences of root mean squared forecast errors as the target date is approached) to evaluate and compare fixed-target forecasts of Arctic sea ice. We first use them to evaluate the simple feature-engineered linear regression (FELR) forecasts of Diebold and Gobel (2022), and to compare FELR forecasts to naive pure-trend benchmark forecasts. Then we introduce a much more sophisticated feature-engineered machine learning (FEML) model, and we use glide charts to evaluate FEML forecasts and compare them to a FELR benchmark. Our substantive results include the frequent appearance of predictability thresholds, which differ across months, meaning that accuracy initially fails to improve as the target date is approached but then increases progressively once a threshold lead time is crossed. Also, we find that FEML can improve appreciably over FELR when forecasting "turning point" months in the annual cycle at horizons of one to three months ahead.
    Keywords: Seasonal climate forecasting, forecast evaluation and comparison, prediction
    JEL: Q54 C22 C52 C53
    Date: 2022–06–23
    URL: http://d.repec.org/n?u=RePEc:pen:papers:22-028&r=big
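    A minimal sketch of how a glide chart can be assembled: compute RMSE by model and lead time from a table of fixed-target forecasts and plot it as the target date approaches. The forecast file and its columns are hypothetical.

```python
# Hedged sketch: a "glide chart" of RMSE versus lead time by model, built from a
# hypothetical long-format table of fixed-target forecasts.
import pandas as pd
import matplotlib.pyplot as plt

fc = pd.read_csv("sea_ice_forecasts.csv")      # columns: lead, model, forecast, actual
fc["sq_err"] = (fc["forecast"] - fc["actual"]) ** 2
glide = (fc.groupby(["model", "lead"])["sq_err"].mean() ** 0.5).unstack("model")

glide.plot(marker="o")                          # RMSE for each model by lead time
plt.gca().invert_xaxis()                        # long lead times on the left
plt.xlabel("months to target")
plt.ylabel("RMSE")
plt.title("Glide chart: fixed-target forecasts")
plt.show()
```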
  35. By: Maximilien Germain (EDF R&D OSIRIS - Optimisation, Simulation, Risque et Statistiques pour les Marchés de l’Energie - EDF R&D - EDF R&D - EDF - EDF, EDF R&D - EDF R&D - EDF - EDF, EDF - EDF, LPSM (UMR_8001) - Laboratoire de Probabilités, Statistique et Modélisation - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique - UPCité - Université Paris Cité); Huyên Pham (LPSM (UMR_8001) - Laboratoire de Probabilités, Statistique et Modélisation - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique - UPCité - Université Paris Cité, CREST - Centre de Recherche en Économie et Statistique - ENSAI - Ecole Nationale de la Statistique et de l'Analyse de l'Information [Bruz] - X - École polytechnique - ENSAE Paris - École Nationale de la Statistique et de l'Administration Économique - CNRS - Centre National de la Recherche Scientifique, FiME Lab - Laboratoire de Finance des Marchés d'Energie - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres - CREST - EDF R&D - EDF R&D - EDF - EDF); Xavier Warin (EDF R&D OSIRIS - Optimisation, Simulation, Risque et Statistiques pour les Marchés de l’Energie - EDF R&D - EDF R&D - EDF - EDF, EDF R&D - EDF R&D - EDF - EDF, EDF - EDF, FiME Lab - Laboratoire de Finance des Marchés d'Energie - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres - CREST - EDF R&D - EDF R&D - EDF - EDF)
    Abstract: We consider the control of McKean-Vlasov dynamics (or mean-field control) with probabilistic state constraints. We rely on a level-set approach which provides a representation of the constrained problem in terms of an unconstrained one with exact penalization and running maximum or integral cost. The method is then extended to the common noise setting. Our work extends (Bokanowski, Picarelli, and Zidani, SIAM J. Control Optim. 54.5 (2016), pp. 2568–2593) and (Bokanowski, Picarelli, and Zidani, Appl. Math. Optim. 71 (2015), pp. 125–163) to a mean-field setting. The reformulation as an unconstrained problem is particularly suitable for the numerical resolution of the problem, that is achieved from an extension of a machine learning algorithm from (Carmona, Laurière, arXiv:1908.01613 to appear in Ann. Appl. Prob., 2019). A first application concerns the storage of renewable electricity in the presence of mean-field price impact and another one focuses on a mean-variance portfolio selection problem with probabilistic constraints on the wealth. We also illustrate our approach for a direct numerical resolution of the primal Markowitz continuous-time problem without relying on duality.
    Keywords: mean-field control,state constraints,neural networks
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03498263&r=big
  36. By: Ruochen Xiao; Qiaochu Feng; Ruxin Deng
    Abstract: Models trained under complete-market assumptions usually do not work well in an incomplete market. This paper solves the hedging problem in an incomplete market with three sources of incompleteness: risk factor, illiquidity, and discrete transaction dates. A new jump-diffusion model is proposed to describe stochastic asset prices. Three neural networks (RNN, LSTM, and Mogrifier-LSTM) are used to attain hedging strategies, with MSE loss and Huber loss implemented and compared. As a result, Mogrifier-LSTM is the fastest model with the best results under both MSE and Huber loss.
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2211.00948&r=big
  37. By: Katarzyna Kryńska (University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group); Robert Ślepaczuk (University of Warsaw, Faculty of Economic Sciences, Quantitative Finance Research Group, Department of Quantitative Finance)
    Abstract: This thesis investigates the use of various architectures of the LSTM model in algorithmic investment strategies. LSTM models are used to generate buy/sell signals, with previous levels of Bitcoin price and the S&P 500 Index value as inputs. Four approaches are tested: two are regression problems (price level prediction) and the other two are classification problems (prediction of price direction). All approaches are applied to daily, hourly, and 15-minute data and use a walk-forward optimization procedure. The out-of-sample period for the S&P 500 Index is from February 6, 2014 to November 27, 2020, and for Bitcoin it is from January 15, 2014 to December 1, 2020. We discover that classification techniques beat regression methods on average, but we cannot determine if intra-day models outperform inter-day models. We come to the conclusion that the ensembling of models does not always have a positive impact on performance. Finally, a sensitivity analysis is performed to determine how changes in the main hyperparameters of the LSTM model affect strategy performance.
    Keywords: machine learning, deep learning, recurrent neural networks, LSTM, algorithmic trading, ensemble investment strategy, intra-day trading, S&P 500 Index, Bitcoin
    JEL: C4 C14 C45 C53 C58 G13
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2022-25&r=big
  38. By: International Monetary Fund
    Abstract: This technical assistance mission collaborated with the National Institute of Statistics and Informatics in Peru to incorporate big data methods into compilation of the consumer price index (CPI). This includes both prices ingested from the websites of large retailers and data recorded through in-store checkout scanners.
    Date: 2022–10–31
    URL: http://d.repec.org/n?u=RePEc:imf:imfscr:2022/332&r=big
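    A minimal sketch of one standard building block when web-scraped or scanner prices feed a CPI: a Jevons (geometric-mean) elementary index over matched products. The toy price records are placeholders; the mission's actual compilation methods are not reproduced.

```python
# Hedged sketch: Jevons elementary index (geometric mean of price relatives)
# from matched price records, the kind of input big data sources can supply.
import pandas as pd

prices = pd.DataFrame({                         # toy matched price records
    "product_id": ["a", "b", "c", "a", "b", "c"],
    "month":      ["2022-01"] * 3 + ["2022-02"] * 3,
    "price":      [10.0, 4.0, 2.5, 10.5, 4.2, 2.4],
})

wide = prices.pivot(index="product_id", columns="month", values="price").dropna()
relatives = wide["2022-02"] / wide["2022-01"]                # price relatives
jevons = relatives.prod() ** (1 / len(relatives)) * 100      # geometric mean, base = 100
print(f"Jevons index, Feb 2022 (Jan 2022 = 100): {jevons:.1f}")
```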

This nep-big issue is ©2022 by Tom Coupé. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.