New Economics Papers on Computational Economics
Issue of 2022‒05‒02
29 papers chosen by
By: | Bernardo Alves Furtado; Gustavo Onofre Andreão
Abstract: | Public policies are not intrinsically positive or negative; rather, they produce varying levels of effects across different recipients. Methodologically, computational modeling enables the combination of multiple influences on empirical data, thus allowing for heterogeneous responses to policies. We use a random forest machine learning algorithm to emulate an agent-based model (ABM) and evaluate competing policies across 46 Metropolitan Regions (MRs) in Brazil. In doing so, we use input parameters and output indicators of 11,076 actual simulation runs and one million emulated runs. As a result, we obtain the optimal (and non-optimal) performance of each region over the policies. Optimum is defined as a combination of production and inequality indicators for the full ensemble of MRs. Results suggest that MRs already have embedded structures that favor optimal or non-optimal results, but they also illustrate which policy is more beneficial to each place. In addition to providing MR-specific policy results, the use of machine learning to emulate an ABM reduces the computational burden while allowing for a much larger variation among model parameters. The coherence of results within the context of larger uncertainty (vis-à-vis those of the original ABM) suggests an additional test of robustness of the model. At the same time, the exercise indicates which parameters policymakers should intervene on in order to work towards the optimum of MRs. (A code sketch of the emulation step follows this entry.)
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.02576&r= |
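A minimal sketch of the emulation step described above, using scikit-learn: a random forest is trained on (parameter, indicator) pairs standing in for the ABM runs, then queried on a large emulated parameter sweep. The parameter dimension, synthetic data, and composite score are illustrative assumptions, not the paper's actual setup.

```python
# Sketch: emulate an ABM with a random forest, then score policies
# on a large emulated parameter sweep (toy data, not the paper's).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for the 11,076 actual ABM runs: inputs are policy/model
# parameters, outputs are production and inequality indicators.
X = rng.uniform(0, 1, size=(11_076, 6))           # simulation parameters
y = np.c_[X @ rng.normal(size=6),                 # "production" indicator
          (X ** 2) @ rng.normal(size=6)]          # "inequality" indicator

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
emulator = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
emulator.fit(X_tr, y_tr)
print("held-out R^2:", emulator.score(X_te, y_te))

# One million emulated runs: far cheaper than re-running the ABM.
X_big = rng.uniform(0, 1, size=(1_000_000, 6))
pred = emulator.predict(X_big)

# Composite objective: high production, low inequality (illustrative).
score = pred[:, 0] - pred[:, 1]
print("parameter vector with best emulated outcome:", X_big[score.argmax()])
```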
By: | Fedor Zagumennov; Andrei Bystrov; Alexey Radaykin (Plekhanov Russian University of Economics, Department of Industrial Economics, Moscow, Russia)
Abstract: | " Objective - The objective of this paper is to consider using machine learning approaches for in-firm processes prediction and to give an estimation of such values as effective production quantities. Methodology - The research methodology used is a synthesis of a deep-learning model, which is used to predict half of real business data for comparison with the remaining half. The structure of the convolutional neural network (CNN) model is provided, as well as the results of experiments with real orders, procurements, and income data. The key findings in this paper are that convolutional with a long-short-memory approach is better than a single convolutional method of prediction. Findings - This research also considers useof such technologies on business digital platforms. According to the results, there are guidelines formulated for the implementation in the particular ERP systems or web business platforms. Novelty - This paper describes the practical usage of 1-dimensional(1D) convolutional neural networks and a mixed approach with convolutional and long-short memory networks for in-firm planning tasks such as income prediction, procurements, and order demand analysis. Type of Paper - Empirical." |
Keywords: | Business; Neural Networks; CNN; Platform
JEL: | C45 C49 |
Date: | 2021–12–31 |
URL: | http://d.repec.org/n?u=RePEc:gtr:gatrjs:jber213&r= |
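A minimal sketch, in Keras, of the two architectures the abstract compares: a purely convolutional 1D forecaster versus a Conv1D + LSTM hybrid. The window length, layer sizes, and univariate toy series are assumptions for illustration, not the paper's configuration.

```python
# Sketch: pure 1D-CNN forecaster vs. CNN + LSTM hybrid on a toy series.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

WINDOW, HORIZON = 30, 1   # 30 past observations -> next value

def cnn_only():
    return tf.keras.Sequential([
        layers.Conv1D(32, 5, activation="relu", input_shape=(WINDOW, 1)),
        layers.GlobalAveragePooling1D(),
        layers.Dense(HORIZON),
    ])

def cnn_lstm():
    return tf.keras.Sequential([
        layers.Conv1D(32, 5, activation="relu", input_shape=(WINDOW, 1)),
        layers.LSTM(32),                  # temporal memory on CNN features
        layers.Dense(HORIZON),
    ])

# Toy series standing in for orders/procurement/income data.
series = np.sin(np.arange(1000) / 20) + np.random.normal(0, .1, 1000)
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]

for make in (cnn_only, cnn_lstm):
    m = make()
    m.compile(optimizer="adam", loss="mse")
    m.fit(X[:800], y[:800], epochs=5, verbose=0)
    print(make.__name__, "test MSE:", m.evaluate(X[800:], y[800:], verbose=0))
```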
By: | Martin Magris; Mostafa Shabani; Alexandros Iosifidis |
Abstract: | The prediction of financial markets is a challenging yet important task. In modern electronically-driven markets, traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics. While recent research has established the effectiveness of traditional machine learning (ML) models in financial applications, their intrinsic inability to deal with uncertainties, which is a great concern in econometrics research and real business applications, constitutes a major drawback. Bayesian methods naturally appear as a suitable remedy, combining the predictive ability of ML methods with the probabilistically-oriented practice of econometric research. By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention, suitable for the challenging time-series task of predicting mid-price movements in ultra-high-frequency limit-order book markets. By using predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives. Our results underline the feasibility of the Bayesian deep learning approach and its predictive and decisional advantages in complex econometric tasks, prompting future research in this direction.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.03613&r= |
By: | Cameron Fen; Samir Undavia |
Abstract: | We show that pooling macroeconomic data across countries along a panel dimension can improve, by a statistically significant margin, the generalization ability of structural, reduced-form, and machine learning (ML) methods, producing state-of-the-art results. Using GDP forecasts evaluated on an out-of-sample test set, this procedure reduces root mean squared error by 12% across horizons and models for certain reduced-form models and by 24% across horizons for dynamic stochastic general equilibrium models. Removing US data from the training set and forecasting out-of-sample country-wise, we show that reduced-form and structural models are more policy-invariant when trained on pooled data, and outperform a baseline that uses US data only. Given the comparative advantage of ML models in a data-rich regime, we demonstrate that our recurrent neural network model and automated ML approach outperform all tested baseline economic models. Robustness checks indicate that our outperformance is reproducible, numerically stable, and generalizable across models. (A toy version of the pooling experiment follows this entry.)
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.06540&r= |
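A toy version of the pooling experiment, assuming simulated AR-type growth series and ridge regressions as stand-ins for the paper's models: one forecaster fit on all countries' stacked training data is compared country-by-country with per-country models on held-out halves.

```python
# Sketch: pooled panel forecaster vs. per-country models (simulated data).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
LAGS, T, N_COUNTRIES = 4, 200, 20

def make_country():
    # AR(1)-ish growth series with shared dynamics plus country noise.
    g = [0.5]
    for _ in range(T - 1):
        g.append(0.7 * g[-1] + rng.normal(0, 0.5))
    g = np.array(g)
    X = np.stack([g[i:i + LAGS] for i in range(T - LAGS)])
    return X, g[LAGS:]

data = [make_country() for _ in range(N_COUNTRIES)]
split = (T - LAGS) // 2

# Pooled: one model on all countries' training halves.
Xp = np.vstack([X[:split] for X, _ in data])
yp = np.hstack([y[:split] for _, y in data])
pooled = Ridge().fit(Xp, yp)

rmse_pooled, rmse_single = [], []
for X, y in data:
    single = Ridge().fit(X[:split], y[:split])
    rmse_pooled.append(mean_squared_error(y[split:], pooled.predict(X[split:])) ** .5)
    rmse_single.append(mean_squared_error(y[split:], single.predict(X[split:])) ** .5)

print("mean RMSE, pooled:     ", np.mean(rmse_pooled))
print("mean RMSE, per-country:", np.mean(rmse_single))
```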
By: | Gian Maria Campedelli |
Abstract: | Purpose: To explore the potential of explainable machine learning in the prediction and detection of drivers of cleared homicides at the national and state levels in the United States. Methods: First, nine algorithmic approaches are compared to assess the best performance in predicting cleared homicides country-wise, using data from the Murder Accountability Project. The most accurate algorithm (XGBoost) is then used to predict clearance outcomes state-wise. Second, SHAP, a framework for explainable artificial intelligence, is employed to capture the most important features in explaining clearance patterns at both the national and state levels. Results: At the national level, XGBoost achieves the best performance overall. Substantial predictive variability is detected state-wise. In terms of explainability, SHAP highlights the relevance of several features in consistently predicting investigation outcomes. These include homicide circumstances, weapons, victims' sex and race, as well as the number of involved offenders and victims. Conclusions: Explainable machine learning proves to be a helpful framework for predicting homicide clearance. SHAP outcomes suggest a more organic integration of the two theoretical perspectives that have emerged in the literature. Furthermore, jurisdictional heterogeneity highlights the importance of developing ad hoc state-level strategies to improve police performance in clearing homicides. (A code sketch of the XGBoost-plus-SHAP pipeline follows this entry.)
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.04768&r= |
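A minimal sketch of the XGBoost-plus-SHAP pipeline on synthetic records; the feature names mirror those mentioned in the abstract, but the data, labels, and hyperparameters are illustrative assumptions.

```python
# Sketch: gradient-boosted clearance classifier + SHAP feature ranking.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "circumstance": rng.integers(0, 10, n),
    "weapon":       rng.integers(0, 6, n),
    "victim_sex":   rng.integers(0, 2, n),
    "victim_race":  rng.integers(0, 5, n),
    "n_offenders":  rng.integers(1, 4, n),
    "n_victims":    rng.integers(1, 3, n),
})
# Synthetic clearance outcome, loosely tied to two features.
y = (X["weapon"] + X["n_offenders"] + rng.normal(0, 1, n) > 4).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
ranking = pd.Series(np.abs(sv).mean(axis=0), index=X.columns)
print(ranking.sort_values(ascending=False))   # global SHAP importance
```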
By: | Ola Hall; Mattias Ohlsson; Thorsteinn Rögnvaldsson
Abstract: | Recent advances in artificial intelligence and machine learning have created a step change in how human development indicators, in particular asset-based poverty, are measured. The combination of satellite imagery and machine learning can estimate poverty at a level similar to that achieved with workhorse methods such as face-to-face interviews and household surveys. An increasingly important issue beyond static estimation is whether this technology can contribute to scientific discovery and, consequently, new knowledge in the poverty and welfare domain. A foundation for achieving scientific insights is domain knowledge, which in turn translates into explainability and scientific consistency. We review the literature focusing on three core elements relevant in this context, transparency, interpretability, and explainability, and investigate how they relate to the poverty, machine learning, and satellite imagery nexus. Our review of the field shows that the status of the three core elements of explainable machine learning (transparency, interpretability, and domain knowledge) is varied and does not completely fulfill the requirements set up for scientific insights and discoveries. We argue that explainability is essential to support wider dissemination and acceptance of this research, and that explainability means more than just interpretability.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.01068&r= |
By: | Narayana Darapaneni; Anwesh Reddy Paduri; Himank Sharma; Milind Manjrekar; Nutan Hindlekar; Pranali Bhagat; Usha Aiyer; Yogesh Agarwal |
Abstract: | Stock market prediction has been an active area of research for a considerable period. The arrival of computing, followed by machine learning, has increased the speed of research and opened new avenues. In this study, we aimed to predict future stock movements using historical prices aided by the availability of sentiment data. Two models were used. The first, an LSTM, took historical prices as the independent variable. The second, a random forest model, used sentiment captured with an intensity analyzer as its major parameter; macro parameters such as gold and oil prices, the USD exchange rate, and Indian government securities yields were also added to improve accuracy. As the end product, the prices of four stocks, viz. Reliance, HDFC Bank, TCS, and SBI, were predicted using the two models, and the results were evaluated using the RMSE metric. (A code sketch of the sentiment-plus-macro random forest follows this entry.)
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.05783&r= |
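A minimal sketch of the second model: sentiment scores combined with macro features in a random forest. The column names and toy data are assumptions, as is the use of NLTK's VADER SentimentIntensityAnalyzer as the "intensity analyzer".

```python
# Sketch: VADER sentiment + macro features -> random forest price model.
import numpy as np
import pandas as pd
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # nltk.download('vader_lexicon')
from sklearn.ensemble import RandomForestRegressor

sia = SentimentIntensityAnalyzer()
headlines = pd.Series(["Reliance posts record quarterly profit",
                       "Banking stocks slump on rate worries"])
print(headlines.map(lambda s: sia.polarity_scores(s)["compound"]).tolist())

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "sentiment":  rng.uniform(-1, 1, n),    # daily compound score
    "gold":       rng.normal(1800, 50, n),
    "oil":        rng.normal(80, 10, n),
    "usd_inr":    rng.normal(75, 2, n),
    "gsec_yield": rng.normal(6.5, .3, n),
})
# Toy target loosely driven by sentiment.
next_day_price = 100 + 5 * df["sentiment"] + rng.normal(0, 1, n)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(df, next_day_price)
rmse = np.sqrt(np.mean((rf.predict(df) - next_day_price) ** 2))
print("in-sample RMSE (toy):", rmse)
```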
By: | Ariel Neufeld; Julian Sester; Daiying Yin |
Abstract: | We present an approach, based on deep neural networks, for identifying robust statistical arbitrage strategies in financial markets. Robust statistical arbitrage strategies refer to self-financing trading strategies that enable profitable trading under model ambiguity. The presented novel methodology does not suffer from the curse of dimensionality, nor does it depend on the identification of cointegrated pairs of assets, and it is therefore applicable even in high-dimensional financial markets or in markets where classical pairs trading approaches fail. Moreover, we provide a method to build an ambiguity set of admissible probability measures that can be derived from observed market data. Thus, the approach can be considered model-free and entirely data-driven. We showcase the applicability of our method with empirical investigations showing highly profitable trading performance even in 50 dimensions, during financial crises, and when the cointegration relationship between asset pairs ceases to persist.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.03179&r= |
By: | Jonathan Berrisch; Michał Narajewski; Florian Ziel
Abstract: | This paper presents a method for estimating high-resolution electricity peak demand given lower-resolution data. The technique won a data competition organized by the British distribution network operator Western Power Distribution. The exercise was to estimate, as precisely as possible, the minimum and maximum load values at a single substation at one-minute resolution, whereas the data were given at half-hourly and hourly resolutions. The winning method combines generalized additive models (GAM) and deep artificial neural networks (DNN), both popular in load forecasting. We provide an extensive analysis of the prediction models, including the importance of input parameters with a focus on load, weather, and seasonal effects. In addition, we provide a rigorous evaluation study that goes beyond the competition frame to analyze robustness. The results show that the proposed methods are superior not only in the single competition month but also in the broader evaluation study.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.03342&r= |
By: | Raad Khraishi; Ramin Okhrati |
Abstract: | We introduce a method for pricing consumer credit using recent advances in offline deep reinforcement learning. This approach relies on a static dataset and requires no assumptions on the functional form of demand. Using both real and synthetic data on consumer credit applications, we demonstrate that our approach, using the conservative Q-learning algorithm, is capable of learning an effective personalized pricing policy without any online interaction or price experimentation. (A sketch of the conservative Q-learning objective follows this entry.)
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.03003&r= |
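A sketch of the conservative Q-learning (CQL) objective on a discretized price grid, in PyTorch: the usual TD loss plus a penalty that pushes down Q-values of prices absent from the logged data. The state encoding, price grid, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Sketch: one CQL gradient step on a toy batch of logged pricing data.
import torch
import torch.nn as nn

N_PRICES = 10                      # discretized price grid (assumption)
qnet = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, N_PRICES))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
alpha, gamma = 1.0, 0.99           # conservatism weight, discount

def cql_loss(s, a, r, s_next, done):
    q = qnet(s)                                     # (batch, N_PRICES)
    q_taken = q.gather(1, a[:, None]).squeeze(1)    # Q of logged price
    with torch.no_grad():
        target = r + gamma * (1 - done) * qnet(s_next).max(1).values
    td = ((q_taken - target) ** 2).mean()
    # Conservative term: logsumexp over all prices minus logged price's Q,
    # which lowers Q-values for prices never tried in the static dataset.
    conservative = (torch.logsumexp(q, dim=1) - q_taken).mean()
    return td + alpha * conservative

# Toy batch of logged (state, price index, profit, next state, done) tuples.
B = 32
batch = (torch.randn(B, 4), torch.randint(0, N_PRICES, (B,)),
         torch.randn(B), torch.randn(B, 4), torch.zeros(B))
loss = cql_loss(*batch)
opt.zero_grad(); loss.backward(); opt.step()
print("CQL loss:", float(loss))
```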
By: | Javad T. Firouzjaee; Pouriya Khaliliyan |
Abstract: | Russia's attack on Ukraine on Thursday 24 February 2022 rattled financial markets and intensified the geopolitical crisis. In this paper, we select some main economic indexes involved in this crisis, such as gold, oil (WTI), NDAQ, and well-known currencies, and try to quantify the effect of the war on them. To quantify the war effect, we use the correlation features and relationships between these economic indices, create datasets, and compare the results of forecasts with real data. To study the war effects, we use machine learning linear regression. We carry out empirical experiments on these economic index datasets to evaluate and predict the war's toll and its effects on the main economic indexes.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.01738&r= |
By: | Philippe Cotte; Pierre Lagier; Vincent Margot; Christophe Geissler |
Abstract: | This article is the result of a collaboration between Fujitsu and Advestis. The collaboration aims at refactoring and running an algorithm, based on systematic exploration, that produces investment recommendations on Fugaku, a high-performance computer, to see whether a very large number of cores could allow for a deeper exploration of the data compared to a cloud machine, hopefully resulting in better predictions. We found that an increase in the number of explored rules results in a net increase in the predictive performance of the final ruleset. In the particular case of this study, we also found that using more than around 40 cores does not bring a significant computation-time gain. However, the origin of this limitation is explained by the threshold-based search heuristic used to prune the search space. We have evidence that for similar data sets with less restrictive thresholds, the number of cores actually used could very well be much higher, allowing parallelization to have a much greater effect.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.00427&r= |
By: | Jase Clarkson; Mihai Cucuringu; Andrew Elliott; Gesine Reinert |
Abstract: | In this work, we introduce DAMNETS, a deep generative model for Markovian network time series. Time series of networks are found in many fields such as trade or payment networks in economics, contact networks in epidemiology or social media posts over time. Generative models of such data are useful for Monte-Carlo estimation and data set expansion, which is of interest for both data privacy and model fitting. Using recent ideas from the Graph Neural Network (GNN) literature, we introduce a novel GNN encoder-decoder structure in which an encoder GNN learns a latent representation of the input graph, and a decoder GNN uses this representation to simulate the network dynamics. We show using synthetic data sets that DAMNETS can replicate features of network topology across time observed in the real world, such as changing community structure and preferential attachment. DAMNETS outperforms competing methods on all of our measures of sample quality over several real and synthetic data sets. |
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.15009&r= |
By: | Federico Cornalba; Constantin Disselkamp; Davide Scassola; Christopher Helf |
Abstract: | We investigate the potential of multi-objective, deep reinforcement learning for stock and cryptocurrency trading. More specifically, we build on the generalized setting à la Fontaine and Friedman (arXiv:1809.06364), where the reward weighting mechanism is not specified a priori but embedded in the learning process, by complementing it with computational speed-ups and adding the cumulative reward's discount factor to the learning process. Firstly, we verify that the resulting multi-objective algorithm generalizes well, and we provide preliminary statistical evidence showing that its predictions are more stable than those of the corresponding single-objective strategy. Secondly, we show that the multi-objective algorithm has a clear edge over the corresponding single-objective strategy when the reward mechanism is sparse (i.e., when non-null feedback is infrequent over time). Finally, we discuss the generalization properties of the discount factor. The entirety of our code is provided in open-source format.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.04579&r= |
By: | Jagoda Kaszowska-Mojsa (Institute of Economics Polish Academy of Sciences); Przemyslaw Wlodarczyk (University of Lodz) |
Abstract: | The ongoing COVID-19 epidemic raises numerous questions concerning the shape and range of state interventions aimed at reducing the number of infections and deaths. Lockdowns, which became the most popular response worldwide, are assessed as an outdated and economically inefficient way to fight the disease; however, in the absence of efficient cures and vaccines, they lack viable alternatives. In this paper we assess the economic consequences of the epidemic prevention and control schemes introduced in response to the COVID-19 outbreak. The analyses report the results of epidemic simulations obtained with agent-based modelling methods under different response schemes and use them to provide conditional forecasts of standard economic variables. The forecasts are obtained from a DSGE model with a labour market component.
Keywords: | COVID-19, agent-based modelling, dynamic stochastic general equilibrium models, scenario analyses |
JEL: | C6 D5 |
Date: | 2020–11–10 |
URL: | http://d.repec.org/n?u=RePEc:ann:wpaper:3/2020&r= |
By: | Rahul Singh; Vasilis Syrgkanis |
Abstract: | We extend the idea of automated debiased machine learning to the dynamic treatment regime. We show that the multiply robust formula for the dynamic treatment regime with discrete treatments can be re-stated in terms of a recursive Riesz representer characterization of nested mean regressions. We then apply a recursive Riesz representer estimation algorithm that learns de-biasing corrections without the need to characterize what the correction terms look like, such as, for instance, products of inverse probability weighting terms, as is done in prior work on doubly robust estimation in the dynamic regime. Our approach defines a sequence of loss minimization problems whose minimizers are the multipliers of the de-biasing correction, hence circumventing the need to solve auxiliary propensity models and directly optimizing the mean squared error of the target de-biasing correction. (A schematic of the underlying loss follows this entry.)
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.13887&r= |
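For orientation, the static (single-period) loss that automated debiased ML builds on, and which the paper extends recursively to nested mean regressions; this schematic follows the automatic debiasing literature and is not the paper's exact dynamic formulation.

```latex
% Schematic: static automated-debiasing loss. For a functional
% \theta_0 = E[m(W; g_0)] with Riesz representer a_0 satisfying
% E[m(W; g)] = E[a_0(W) g(W)] for all g, estimate a_0 directly by
\hat{a} \;=\; \arg\min_{a \in \mathcal{A}} \;
  \frac{1}{n}\sum_{i=1}^{n}\Big( a(W_i)^2 - 2\, m(W_i; a) \Big),
% which targets a_0 because, in population,
% E[a(W)^2 - 2 m(W; a)] = E[(a(W) - a_0(W))^2] - E[a_0(W)^2],
% so the loss never requires a closed form for the correction term.
```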
By: | Cameron Fen |
Abstract: | This paper proposes a simulation-based deep learning Bayesian procedure for the estimation of macroeconomic models. This approach is able to derive posteriors even when the likelihood function is not tractable. Because the likelihood is not needed for Bayesian estimation, filtering is also not needed. This allows Bayesian estimation of HANK models with upwards of 800 latent states, as well as estimation of representative agent models that are solved with methods that do not yield a likelihood, for example, projection and value function iteration approaches. I demonstrate the validity of the approach by estimating a 10-parameter HANK model solved via the Reiter method that generates 812 covariates per time step, of which 810 are latent variables, showing the method can handle a large latent space without model reduction. I also apply the algorithm to an 11-parameter model solved via value function iteration, which cannot be estimated with Metropolis-Hastings or even conventional maximum likelihood estimators. In addition, I show that the posteriors estimated on the Smets and Wouters (2007) model are of higher quality and obtained faster using simulation-based inference than with Metropolis-Hastings. This approach helps address the computational expense of Metropolis-Hastings and allows solution methods which do not yield a tractable likelihood to be estimated.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.06537&r= |
By: | Mufhumudzi Muthivhi; Terence L. van Zyl |
Abstract: | The fusion of public sentiment data in the form of text with stock price prediction is a topic of increasing interest within the financial community. However, the research literature seldom explores the application of investor sentiment in the portfolio selection problem. This paper aims to unpack and develop an enhanced understanding of the sentiment-aware portfolio selection problem. To this end, the study uses a semantic attention model to predict sentiment towards an asset. We select the optimal portfolio through a sentiment-aware long short-term memory (LSTM) recurrent neural network for price prediction and a mean-variance strategy. Our sentiment portfolio strategies achieved, on average, a significant increase in revenue over non-sentiment-aware models. However, the results show that our strategy does not outperform traditional portfolio allocation strategies from a stability perspective. We argue that an improved fusion of sentiment prediction with a combination of price prediction and portfolio optimization would lead to an enhanced portfolio selection strategy.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.05673&r= |
By: | Yuanrong Wang; Tomaso Aste |
Abstract: | We propose an end-to-end architecture for multivariate time-series prediction that integrates a spatial-temporal graph neural network with a matrix filtering module. This module generates filtered (inverse) correlation graphs from multivariate time series before inputting them into a GNN. In contrast with existing sparsification methods adopted in graph neural networks, our model explicitly leverages time-series filtering to overcome the low signal-to-noise ratio typical of complex-systems data. We present a set of experiments in which we predict future sales from a synthetic time-series sales dataset. The proposed spatial-temporal graph neural network displays superior performance with respect to baseline approaches using no graphical information, fully connected graphs, disconnected graphs, and unfiltered graphs. (A sketch of the filtering step follows this entry.)
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.03991&r= |
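A minimal sketch of the filtering module's idea: estimate a shrunk covariance from the panel, invert it, convert to partial correlations, and threshold the result into the adjacency matrix fed to the GNN. The Ledoit-Wolf shrinkage and the 0.1 threshold are illustrative choices, not the paper's exact filter.

```python
# Sketch: filtered inverse-correlation graph from a multivariate panel.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
panel = rng.normal(size=(500, 30))            # T x N sales/returns panel

cov = LedoitWolf().fit(panel).covariance_    # shrunk covariance estimate
prec = np.linalg.inv(cov)                    # precision (inverse) matrix
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)        # partial correlations
np.fill_diagonal(partial_corr, 0.0)

# Keep only strong conditional dependencies as graph edges for the GNN.
adj = (np.abs(partial_corr) > 0.1).astype(float)
print("edges kept:", int(adj.sum() / 2), "of", 30 * 29 // 2)
```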
By: | Resce, Giuliano; Vaquero-Pineiro, Cristina |
Abstract: | Geographical Indications (GIs), such as Protected Designation of Origin (PDO) and Protected Geographical Indication (PGI), offer a unique protection scheme to preserve high-quality agri-food production and support rural development, and they have been recognised as a powerful tool to enhance sustainable development and ecological economic transactions at the territorial level. However, not all areas with traditional agri-food products are acknowledged with a GI. Examining the Italian wine sector through a geo-referenced machine learning framework, we show that municipalities which obtain a GI within the following 10 years (2002-2011) can be predicted using a large set of (lagged) municipality-level data (1981-2001). We find that the Random Forest algorithm is the best model for making out-of-sample predictions of municipalities which obtain GIs. Among the features used, local wine-growing tradition, proximity to capital cities, and local employment and education rates emerge as crucial in the prediction of GI certifications. This evidence can support policy makers and stakeholders in targeting rural development policies and investment allocation, and it offers strong policy implications for future reforms of this quality scheme.
Keywords: | Geographical Indications, Rural Development, Agri-Food Production, Machine Learning, Geo-Referenced Data |
JEL: | C53 Q18 |
Date: | 2022–04–11 |
URL: | http://d.repec.org/n?u=RePEc:mol:ecsdps:esdp22082&r= |
By: | Martin Veselý
Abstract: | The main purpose of this article is to evaluate possible applications of quantum computers in foreign exchange reserves management. The capabilities of quantum computers are demonstrated by means of risk measurement using the quantum Monte Carlo method and portfolio optimization using a linear equations system solver (the Harrow-Hassidim-Lloyd algorithm) and quadratic unconstrained binary optimization (the quantum approximate optimization algorithm). All demonstrations are carried out on the cloud-based IBM Quantum(TM) platform. Despite the fact that real-world applications are impossible under the current state of development of quantum computers, it is proven that in principle it will be possible to apply such computers in FX reserves management in the future. In addition, the article serves as an introduction to quantum computing for the staff of central banks and financial market supervisory authorities. |
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.15716&r= |
By: | Deborah Sulem; Henry Kenlay; Mihai Cucuringu; Xiaowen Dong |
Abstract: | Dynamic networks are ubiquitous for modelling sequential graph-structured data, e.g., brain connectomes, population flows, and message exchanges. In this work, we consider dynamic networks that are temporal sequences of graph snapshots, and aim at detecting abrupt changes in their structure. This task is often termed network change-point detection and has numerous applications, such as fraud detection or physical motion monitoring. Leveraging a graph neural network model, we design a method to perform online network change-point detection that can adapt to the specific network domain and localise changes with no delay. The main novelty of our method is the use of a siamese graph neural network architecture for learning a data-driven graph similarity function, which allows effective comparison of the current graph with its recent history. Importantly, our method does not require prior knowledge of the network generative distribution and is agnostic to the type of change-points; moreover, it can be applied to a large variety of networks that include, for instance, edge weights and node attributes. We show on synthetic and real data that our method enjoys a number of benefits: it is able to learn an adequate graph similarity function for performing online network change-point detection in diverse types of change-point settings, and it requires a shorter data history to detect changes than most existing state-of-the-art baselines.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.15470&r= |
By: | Pok Wah Chan |
Abstract: | Extreme pricing anomalies may occur unexpectedly without a trivial cause, and equity traders typically undergo a meticulous process of sourcing disparate information and analyzing its reliability before integrating it into a trusted knowledge base. We introduce DeepTrust, a reliable financial knowledge retrieval framework on Twitter that explains extreme price moves at speed while ensuring data veracity using state-of-the-art NLP techniques. Our proposed framework consists of three modules, specialized for anomaly detection, information retrieval, and reliability assessment. The workflow starts by identifying anomalous asset price changes using machine learning models trained on historical pricing data, and retrieving correlated unstructured data from Twitter using enhanced queries with dynamic search conditions. DeepTrust extrapolates information reliability from tweet features, traces of generative language models, argumentation structure, subjectivity, and sentiment signals, and refines a concise collection of credible tweets for market insights. The framework is evaluated on two self-annotated financial anomalies, i.e., Twitter and Facebook stock prices on 29 and 30 April 2021. The optimal setup outperforms the baseline classifier by 7.75% and 15.77% in F0.5-score, and 10.55% and 18.88% in precision, respectively, proving its capability to screen unreliable information precisely. The information retrieval and reliability assessment modules are also analyzed individually for their effectiveness and the causes of their limitations, with identified subjective and objective factors that influence performance. As a collaborative project with Refinitiv, this framework paves a promising path towards a scalable commercial solution that assists traders in reaching investment decisions on pricing anomalies with authenticated knowledge from social media platforms in real time.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.08144&r= |
By: | Rama Cont; Mihai Cucuringu; Renyuan Xu; Chao Zhang |
Abstract: | The estimation of loss distributions for dynamic portfolios requires the simulation of scenarios representing realistic joint dynamics of their components, with particular importance devoted to the simulation of tail risk scenarios. Commonly used parametric models have been successful in applications involving a small number of assets, but may not be scalable to large or heterogeneous portfolios involving multiple asset classes. We propose a novel data-driven approach for the simulation of realistic multi-asset scenarios with a particular focus on the accurate estimation of tail risk for a given class of static and dynamic portfolios selected by the user. By exploiting the joint elicitability property of Value-at-Risk (VaR) and Expected Shortfall (ES), we design a Generative Adversarial Network (GAN) architecture capable of learning to simulate price scenarios that preserve tail risk features for these benchmark trading strategies, leading to consistent estimators for their Value-at-Risk and Expected Shortfall. We demonstrate the accuracy and scalability of our method via extensive simulation experiments using synthetic and market data. Our results show that, in contrast to other data-driven scenario generators, our proposed scenario simulation method correctly captures tail risk for both static and dynamic portfolios. |
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.01664&r= |
By: | Isaiah Hull; Anna Grodecka-Messi |
Abstract: | How do property prices respond to changes in local taxes and local public services? Attempts to measure this, starting with Oates (1969), have suffered from a lack of local public service controls. Recent work attempts to overcome such data limitations through the use of quasi-experimental methods. We revisit this fundamental problem, but adopt a different empirical strategy that pairs the double machine learning estimator of Chernozhukov et al. (2018) with a novel dataset of 947 time-varying local characteristic and public service controls for all municipalities in Sweden over the 2010-2016 period. We find that properly controlling for local public services and characteristics more than doubles the estimated impact of local income taxes on house prices. We also exploit the unique features of our dataset to demonstrate that tax capitalization is stronger in areas with greater municipal competition, providing support for a core implication of the Tiebout hypothesis. Finally, we measure the impact of public services, education, and crime on house prices and the effect of local taxes on migration. (A sketch of the double ML estimator follows this entry.)
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.14751&r= |
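A minimal sketch of the cross-fitted partialling-out estimator in the spirit of Chernozhukov et al. (2018), on simulated stand-ins for the Swedish data: prices and tax rates are residualized on the controls with random forests, and the effect is the OLS coefficient of residual on residual.

```python
# Sketch: double ML (partialling out with cross-fitting) on toy data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, theta_true = 2000, 50, -2.0
X = rng.normal(size=(n, p))                     # municipality controls
tax = X[:, 0] + rng.normal(size=n)              # local income tax rate
price = theta_true * tax + X[:, 1] + rng.normal(size=n)

res_y, res_t = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    my = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[train], price[train])
    mt = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[train], tax[train])
    res_y[test] = price[test] - my.predict(X[test])   # price residual
    res_t[test] = tax[test] - mt.predict(X[test])     # tax residual

theta_hat = (res_t @ res_y) / (res_t @ res_t)   # final OLS on residuals
print("estimated tax effect:", theta_hat)       # close to -2.0
```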
By: | Mike Ludkovski |
Abstract: | I develop a numerical algorithm for stochastic impulse control in the spirit of Regression Monte Carlo for optimal stopping. The approach consists in generating statistical surrogates (aka functional approximators) for the continuation function. The surrogates are recursively trained by empirical regression over simulated state trajectories. In parallel, the same surrogates are used to learn the intervention function characterizing the optimal impulse amounts. I discuss appropriate surrogate types for this task, as well as the choice of training sets. Case studies from forest rotation and irreversible investment illustrate the numerical scheme and highlight its flexibility and extensibility. An implementation in R is provided as a publicly available package posted on GitHub. (A sketch of the underlying Regression Monte Carlo idea follows this entry.)
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2203.06539&r= |
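A minimal sketch of the Regression Monte Carlo idea the scheme builds on, shown for optimal stopping (a Bermudan put under geometric Brownian motion, Longstaff-Schwartz style): realized continuation values are regressed on the simulated state to form the surrogate continuation function. The dynamics, payoff, and cubic polynomial surrogate are illustrative assumptions; the paper's algorithm extends this to learning impulse amounts.

```python
# Sketch: Regression Monte Carlo for a Bermudan put (Longstaff-Schwartz).
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 10_000, 50, 1 / 50
r, sigma, K = 0.05, 0.2, 1.0

# Simulate GBM state trajectories.
S = np.empty((n_paths, n_steps + 1))
S[:, 0] = 1.0
for t in range(n_steps):
    z = rng.normal(size=n_paths)
    S[:, t + 1] = S[:, t] * np.exp((r - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * z)

payoff = lambda s: np.maximum(K - s, 0.0)        # Bermudan put payoff
cash = payoff(S[:, -1])                          # value at maturity
for t in range(n_steps - 1, 0, -1):
    cash *= np.exp(-r * dt)                      # discount one step
    itm = payoff(S[:, t]) > 0
    # Polynomial surrogate for the continuation value, fit on ITM paths.
    coef = np.polyfit(S[itm, t], cash[itm], deg=3)
    cont = np.polyval(coef, S[itm, t])
    exercise = payoff(S[itm, t]) > cont          # exercise where immediate
    idx = np.where(itm)[0][exercise]             # payoff beats surrogate
    cash[idx] = payoff(S[idx, t])

print("Bermudan put value:", np.exp(-r * dt) * cash.mean())
```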
By: | Lotfi Boudabsa (Ecole Polytechnique Fédérale de Lausanne - School of Basic Sciences); Damir Filipović (Ecole Polytechnique Fédérale de Lausanne; Swiss Finance Institute) |
Abstract: | We introduce an ensemble learning method for dynamic portfolio valuation and risk management building on regression trees. We learn the dynamic value process of a derivative portfolio from a finite sample of its cumulative cash flow. The estimator is given in closed form. The method is fast and accurate, and scales well with sample size and path-space dimension. The method can also be applied to Bermudan-style options. Numerical experiments show good results for problems of moderate dimension.
Keywords: | dynamic portfolio valuation, ensemble learning, gradient boosting, random forest, regression trees, risk management, Bermudan options |
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp2230&r= |
By: | Vasarhelyi, Orsolya; Brooke, Siân |
Abstract: | Studying gender presents unique challenges to data science. Recent work in the spirit of computational social science returns to a critical approach to operationalisation, providing a fresh perspective on this important topic. In this chapter we highlight works that examine gender computationally, describing how they employ levels of feminist theory to challenge gender inequality at the micro, meso, and macro levels. We argue that paying critical attention to how we infer and analyze gender is fruitful for understanding society and the contributions of research. We also present various sources and methods for inferring gender and provide examples of the application of such methods. We conclude by outlining the way forward for computational methods in the study of gender and intersectional inequality. This is a draft; the final version will be available in the Handbook of Computational Social Science, edited by Taha Yasseri, forthcoming 2023, Edward Elgar Publishing Ltd. The material cannot be used for any other purpose without further permission of the publisher and is for private use only. Please cite as: Vasarhelyi, O., & Brooke, S. (2023). Computing Gender. In: T. Yasseri (Ed.), Handbook of Computational Social Science. Edward Elgar Publishing Ltd.
Date: | 2022–04–08 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:admcs&r= |
By: | Nils Körber; Maximilian Röhrig; Andreas Ulbig
Abstract: | The decarbonization of municipal and district energy systems requires economically and ecologically efficient transformation strategies across a wide spectrum of technical options. Especially in multi-energy systems, which connect energy domains such as heat and electricity supply, the expansion and operational planning of so-called decentral multi-energy systems (DMES) involves a multiplicity of complexities. This motivates the use of optimization problems, which reach the limits of computational feasibility when combined with the required level of detail. With an increased focus on DMES implementation, this problem is aggravated since, moving away from the traditional system perspective, a user-centered, market-integrated perspective is assumed. Besides technical concepts, this requires the consideration of market regimes, e.g. self-consumption and broader energy sharing. This highlights the need for DMES optimization models which cover a microeconomic perspective under consideration of detailed technical options and energy regulation, in order to understand the mutual technical, socio-economic, and ecologic interactions of energy policies. In this context we present a stakeholder-oriented multi-criteria optimization model for DMES which addresses technical aspects as well as market and services coverage towards a real-world implementation. The current work bridges the gap between the required level of modelling detail and the computational feasibility of DMES expansion and operation optimization. Model detail is achieved through a hybrid combination of mathematical methods in a nested multi-level decomposition approach, including a genetic algorithm, Benders decomposition, and Lagrangian relaxation. This also allows for distributed computation on multi-node high-performance computer clusters.
Date: | 2022–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2204.06545&r= |