on Computational Economics
Issue of 2020‒11‒23
23 papers chosen by
By: | Andrea Vandin; Daniele Giachini; Francesco Lamperti; Francesca Chiaromonte |
Abstract: | This paper proposes a novel approach to the statistical analysis of simulation models and, especially, agent-based models (ABMs). Our main goal is to provide a fully automated and model-independent framework to inspect simulations and perform model-based counter-factual analysis that (i) is easy to use by the modeller, (ii) improves reproducibility of results, (iii) is as fast as possible on the modeller's machine by exploiting multi-core architectures, (iv) automatically chooses the number of required simulations and simulation steps to reach a user-specified statistical confidence, and (v) automatically runs a variety of statistical tests that are often overlooked. In particular, the proposed approach allows one to distinguish the transient dynamics of the model from its steady-state behaviour (if any), to estimate properties of the model in both ''phases'', and to equip the results with statistical guarantees, allowing also for robust comparison of model behaviours across computational experiments. The approach instantiates a family of analysis techniques from the computer science community known as statistical model checking, by redesigning and extending the statistical model checker MultiVeStA. We showcase the usefulness of the approach on two models from the literature: a large-scale macro-financial ABM and a small-scale prediction market model, obtaining new insights on the studied models and identifying and fixing erroneous analyses from previous publications. |
Keywords: | ABM; Automated and Distributed Simulation-based Analysis; Statistical Model Checking; Steady-state and Transient analysis; Warmup estimation; T-test and power; Prediction markets; Macro ABM. |
Date: | 2020–11–09 |
URL: | http://d.repec.org/n?u=RePEc:ssa:lemwps:2020/31&r=all |
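Point (iv) of the abstract - adding simulation runs until a user-specified statistical confidence is reached - can be sketched with a simple sequential-sampling loop. This is an illustration of the general idea, not MultiVeStA's actual algorithm; the normal-approximation z-value and the toy Bernoulli model are assumptions:

```python
import random
import statistics

def estimate_until_ci(run_once, half_width=0.05, z=1.96, batch=100,
                      max_runs=100_000):
    """Add simulation runs until the normal-approximation confidence
    interval around the sample mean is narrower than +/- half_width."""
    samples = []
    while len(samples) < max_runs:
        samples.extend(run_once() for _ in range(batch))
        mean = statistics.fmean(samples)
        sem = statistics.stdev(samples) / len(samples) ** 0.5
        if z * sem < half_width:
            break
    return mean, z * sem, len(samples)

random.seed(0)
# Toy "model": a Bernoulli outcome with unknown success probability 0.3.
mean, hw, n = estimate_until_ci(lambda: 1.0 if random.random() < 0.3 else 0.0)
print(n, round(mean, 2))
```

The statistical model checker additionally handles warmup detection and steady-state estimation, which this sketch omits.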
By: | Jérôme Lelong (DAO - Données, Apprentissage et Optimisation - LJK - Laboratoire Jean Kuntzmann - Inria - Institut National de Recherche en Informatique et en Automatique - CNRS - Centre National de la Recherche Scientifique - UGA [2020-....] - Université Grenoble Alpes [2020-....] - Grenoble INP [2020-....] - Institut polytechnique de Grenoble - Grenoble Institute of Technology [2020-....] - UGA [2020-....] - Université Grenoble Alpes [2020-....]); Zineb El Filali Ech-Chafiq (Natixis Asset Management, DAO - Données, Apprentissage et Optimisation - LJK - Laboratoire Jean Kuntzmann - Inria - Institut National de Recherche en Informatique et en Automatique - CNRS - Centre National de la Recherche Scientifique - UGA [2020-....] - Université Grenoble Alpes [2020-....] - Grenoble INP [2020-....] - Institut polytechnique de Grenoble - Grenoble Institute of Technology [2020-....] - UGA [2020-....] - Université Grenoble Alpes [2020-....]); Adil Reghai (Natixis Asset Management) |
Abstract: | Many pricing problems boil down to the computation of a high-dimensional integral, which is usually estimated using Monte Carlo. The accuracy of a Monte Carlo estimator with M simulations is of order σ/√M, meaning that its convergence rate is independent of the dimension of the problem. However, this convergence can be relatively slow depending on the variance σ of the function to be integrated. To address this, one applies variance reduction techniques such as importance sampling, stratification, or control variates. In this paper, we study two approaches for improving the convergence of Monte Carlo using neural networks. The first approach relies on the fact that many high-dimensional financial problems are of low effective dimension [15]. We expose a method to reduce the dimension of such problems in order to keep only the necessary variables. The integration can then be done using fast numerical integration techniques such as Gaussian quadrature. The second approach consists in building an automatic control variate using neural networks. We learn the function to be integrated (which incorporates the diffusion model plus the payoff function) in order to build a network that is highly correlated with it. As the network that we use can be integrated exactly, we can use it as a control variate. |
Date: | 2020–11–05 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02891798&r=all |
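The control-variate idea in the second approach can be illustrated in a few lines. Here a truncated Taylor expansion with a known mean stands in for the trained network; the integrand f and the control g are illustrative assumptions, not the paper's setup:

```python
import math
import random
import statistics

random.seed(1)

# Target: E[f(Z)] with f(z) = exp(z), Z ~ N(0, 1); true value exp(0.5).
f = math.exp
# Control playing the network's role: integrable in closed form, with
# E[g(Z)] = 1 + 0 + 1/2 + 0 = 1.5, and highly correlated with f.
g = lambda z: 1 + z + z * z / 2 + z ** 3 / 6

zs = [random.gauss(0, 1) for _ in range(20_000)]
plain = [f(z) for z in zs]
cv = [f(z) - (g(z) - 1.5) for z in zs]  # subtract the centred control

print(statistics.stdev(plain), statistics.stdev(cv))
```

Both estimators are unbiased for E[f(Z)], but the control-variate sample has a much smaller standard deviation, so the σ/√M Monte Carlo error shrinks accordingly.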
By: | Pooja Gupta; Angshul Majumdar; Emilie Chouzenoux; Giovanni Chierchia |
Abstract: | This work proposes a supervised multi-channel time-series learning framework for financial stock trading. Although many deep learning models have recently been proposed in this domain, most of them treat stock trading time-series data as 2-D image data, whereas its true nature is 1-D time-series data. Moreover, since stock trading systems produce multi-channel data, the existing techniques that do treat them as 1-D time series offer no means to effectively fuse the information carried by the multiple channels. To address both of these shortcomings, we propose an end-to-end supervised learning framework inspired by the previously established (unsupervised) convolution transform learning framework. Our approach consists of processing the data channels through separate 1-D convolution layers, then fusing the outputs with a series of fully-connected layers, and finally applying a softmax classification layer. The peculiarity of our framework, SuperDeConFuse (SDCF), is that we remove the nonlinear activation located between the multi-channel convolution layers and the fully-connected layers, as well as the one located between the latter and the output layer. We compensate for this removal by introducing a suitable regularization on the aforementioned layer outputs and filters during the training phase. Specifically, we apply a log-determinant regularization on the layer filters to break symmetry and force diversity in the learnt transforms, whereas we enforce a non-negativity constraint on the layer outputs to mitigate the issue of dead neurons. This results in the effective learning of a richer set of features and filters with respect to a standard convolutional neural network. Numerical experiments confirm that the proposed model yields considerably better results than state-of-the-art deep learning techniques for the real-world problem of stock trading. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.04364&r=all |
By: | Ogawa, Shogo; Sasaki, Hiroaki |
Abstract: | This study presents a monetary disequilibrium growth model and conducts numerical simulations to investigate how the dynamic paths are affected by the initial conditions and by the parameters of expectation formation. The main results are as follows. First, dynamic properties such as stable convergence and cyclical fluctuations depend on the type of expectation formation rather than on the initial regime: stable convergence takes an excessively long time when expectation formation is too rational, while cyclical fluctuations appear when it is too adaptive. Second, when the economy converges to the steady state (i.e., the Walrasian equilibrium), persistent Keynesian unemployment is likely to appear along the dynamic path. Third, the dynamics of inflation expectations, which contain the price dynamics in their feedback loop, might play an important role in convergence to the steady state. |
Keywords: | Disequilibrium macroeconomics; Non-Walrasian analysis; Economic growth; Simulation |
JEL: | E12 E17 E40 O42 |
Date: | 2020–10–29 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:103845&r=all |
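The contrast between expectation-formation regimes can be seen in a toy price-expectation feedback loop. This is a generic cobweb-style illustration with assumed parameters, not the paper's disequilibrium model: a small adjustment gain converges smoothly, while a fully naive update cycles forever.

```python
def simulate(gain, steps=200, a=10.0, b=1.0, exp0=4.0):
    """Toy feedback: the price responds to the expected price,
    p = a - b * E[p], and the expectation adjusts adaptively with
    the given gain (0 < gain <= 1; gain = 1 is fully naive)."""
    exp_p, path = exp0, []
    for _ in range(steps):
        p = a - b * exp_p              # realized price given expectations
        exp_p += gain * (p - exp_p)    # adaptive expectation update
        path.append(p)
    return path

slow = simulate(0.1)  # sluggish adjustment: smooth convergence to p* = 5
fast = simulate(1.0)  # fully naive: a persistent two-period cycle
print(round(slow[-1], 4), fast[-2:])
```

With b = 1 the naive update repeats the cycle indefinitely, echoing the paper's point that the expectation-formation parameters, not the starting point, determine the dynamic regime.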
By: | Matteo Gardini; Piergiacomo Sabino; Emanuela Sasso |
Abstract: | Using the concept of self-decomposable subordinators introduced in Gardini et al. [11], we build a new bivariate Normal Inverse Gaussian process that can capture stochastic delays. In addition, we develop a novel path simulation scheme that relies on the mathematical connection between self-decomposable Inverse Gaussian laws and Lévy-driven Ornstein-Uhlenbeck processes with Inverse Gaussian stationary distribution. We show that our approach improves on the existing simulation scheme detailed in Zhang and Zhang [23] because it does not rely on an acceptance-rejection method. Finally, these results are applied to the modelling of energy markets and to the pricing of spread options using the proposed Monte Carlo scheme and Fourier techniques. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.04256&r=all |
By: | Friederike Wall; Stephan Leitner |
Abstract: | Agent-based computational economics (ACE) - while adopted comparatively widely in other domains of managerial science - is a rather novel paradigm for management accounting research (MAR). This paper provides an overview of the opportunities and difficulties that ACE may present for research in management accounting and, in particular, introduces a framework that researchers in management accounting may employ when considering ACE as a paradigm for their particular research endeavor. The framework builds on the two interrelated paradigmatic elements of ACE: a set of theoretical assumptions on economic agents and the approach of agent-based modeling. Particular focus is put on contrasting the opportunities and difficulties of ACE with those of other research methods employed in MAR. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.03297&r=all |
By: | Qing Yang; Zhenning Hong; Ruyan Tian; Tingting Ye; Liangliang Zhang |
Abstract: | In this paper, we document a novel machine-learning-based bottom-up approach for static and dynamic portfolio optimization on, potentially, a large number of assets. The methodology overcomes many major difficulties arising in current optimization schemes. For example, we no longer need to compute the covariance matrix and its inverse for mean-variance optimization, so the method is immune to the estimation error on these quantities. Moreover, no explicit calls to optimization routines are needed. Applications to bottom-up mean-variance-skewness-kurtosis or CRRA (Constant Relative Risk Aversion) optimization with short-sale portfolio constraints, in both simulation and real market (China A-shares and U.S. equity markets) environments, are studied and shown to perform very well. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.00572&r=all |
By: | Tadeu A. Ferreira |
Abstract: | Inspired by developments in deep generative models, we propose a model-based RL approach, coined Reinforced Deep Markov Model (RDMM), designed to integrate the desirable properties of a reinforcement learning algorithm acting as an automatic trading system. The network architecture allows for the possibility that market dynamics are only partially visible and are potentially modified by the agent's actions. The RDMM filters incomplete and noisy data to create better-behaved input data for RL planning. The policy search optimisation also properly accounts for state uncertainty. Due to the complexity of the RDMM architecture, we performed ablation studies to better understand the contributions of its individual components. To test the financial performance of the RDMM, we implement policies using variants of the Q-Learning, DynaQ-ARIMA and DynaQ-LSTM algorithms. The experiments show that the RDMM is data-efficient and provides financial gains compared to the benchmarks in the optimal execution problem. The performance improvement becomes more pronounced when price dynamics are more complex, as demonstrated using real data sets from the limit order books of Facebook, Intel, Vodafone and Microsoft. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.04391&r=all |
By: | Mariano Zeron; Ignacio Ruiz |
Abstract: | This paper shows how to use Chebyshev tensors to compute dynamic sensitivities of financial instruments within a Monte Carlo simulation. Dynamic sensitivities are then used to compute Dynamic Initial Margin as defined by the ISDA Standard Initial Margin Model (SIMM). The technique is benchmarked against the computation of dynamic sensitivities obtained by using pricing functions like the ones found in risk engines. We obtain high accuracy and computational gains for FX swaps and spread options. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.04544&r=all |
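The core idea - replacing repeated calls to an expensive pricer with a Chebyshev interpolant that is cheap to evaluate - can be sketched in one dimension. The barycentric form below is a standard technique; the toy "pricer" and node count are illustrative assumptions, and the paper's tensors extend this to several risk factors:

```python
import math

def cheb_interpolant(f, n, lo, hi):
    """Barycentric interpolant of f on n+1 Chebyshev-Lobatto points
    mapped from [-1, 1] to [lo, hi]."""
    xs = [(lo + hi) / 2 + (hi - lo) / 2 * math.cos(j * math.pi / n)
          for j in range(n + 1)]
    fs = [f(x) for x in xs]                      # the only expensive calls
    ws = [(-1.0) ** j * (0.5 if j in (0, n) else 1.0) for j in range(n + 1)]

    def p(x):
        num = den = 0.0
        for xj, fj, wj in zip(xs, fs, ws):
            if x == xj:
                return fj
            t = wj / (x - xj)
            num += t * fj
            den += t
        return num / den

    return p

# Toy smooth "pricer"; for smooth functions a handful of nodes suffices.
price = lambda s: s * math.exp(-s / 100.0)
approx = cheb_interpolant(price, 12, 80.0, 120.0)
err = max(abs(approx(s) - price(s)) for s in (85.0, 97.3, 101.1, 115.0))
print(err)
```

For smooth pricing functions the interpolation error decays geometrically in the number of nodes, which is the source of the computational gains reported.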
By: | Zhaowei She; Zilong Wang; Turgay Ayer; Asmae Toumi; Jagpreet Chhatwal |
Abstract: | Rapid and accurate detection of community outbreaks is critical to addressing the threat of resurgent waves of COVID-19. A practical challenge in outbreak detection is balancing accuracy against speed. In particular, while accuracy improves with estimates based on longer fitting windows, speed degrades. This paper presents a machine learning framework to balance this tradeoff using generalized random forests (GRF), and applies it to detect county-level COVID-19 outbreaks. The algorithm chooses an adaptive fitting-window size for each county based on relevant features affecting the disease spread, such as changes in social distancing policies. Experimental results show that our method outperforms all non-adaptive window-size choices in 7-day-ahead COVID-19 outbreak case number predictions. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.01219&r=all |
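The window-size tradeoff the paper adapts to can be demonstrated with a simple validation-based pick. The trailing-mean forecaster and regime-break series are assumptions for illustration; the GRF machinery that maps county features to window sizes is not reproduced here:

```python
def best_window(series, candidates=(3, 5, 7, 14)):
    """Pick the fitting-window length whose trailing-mean forecast has
    the lowest one-step-ahead mean absolute error over the series."""
    def score(w):
        errs = [abs(series[t] - sum(series[t - w:t]) / w)
                for t in range(w, len(series))]
        return sum(errs) / len(errs)
    return min(candidates, key=score)

# A series with a regime break: short windows recover faster after the
# jump, so the validation criterion selects the shortest one.
series = [10.0] * 20 + [30.0] * 20
chosen = best_window(series)
print(chosen)
```

On a stable but noisy series, longer windows would win instead - exactly the accuracy-versus-speed tension the adaptive framework resolves per county.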
By: | Esther Rolf; Jonathan Proctor; Tamma Carleton; Ian Bolliger; Vaishaal Shankar; Miyabi Ishihara; Benjamin Recht; Solomon Hsiang |
Abstract: | Combining satellite imagery with machine learning (SIML) has the potential to address global challenges by remotely estimating socioeconomic and environmental conditions in data-poor regions, yet the resource requirements of SIML limit its accessibility and use. We show that a single encoding of satellite imagery can generalize across diverse prediction tasks (e.g. forest cover, house price, road length). Our method achieves accuracy competitive with deep neural networks at orders of magnitude lower computational cost, scales globally, delivers label super-resolution predictions, and facilitates characterizations of uncertainty. Since image encodings are shared across tasks, they can be centrally computed and distributed to unlimited researchers, who need only fit a linear regression to their own ground truth data in order to achieve state-of-the-art SIML performance. |
JEL: | C02 C8 O13 O18 Q5 R1 |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:28045&r=all |
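The final claim - researchers need only fit a linear (ridge) regression on shared, precomputed image encodings - is easy to picture. Below, tiny hypothetical two-dimensional "encodings" and labels stand in for the real features, with a closed-form ridge solve; the actual method uses thousands of random convolutional features per image:

```python
def ridge_fit(X, y, lam=1e-3):
    """Closed-form ridge regression for two features: solve
    (X'X + lam*I) beta = X'y with an explicit 2x2 inverse."""
    a = sum(x[0] * x[0] for x in X) + lam
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + lam
    g0 = sum(x0 * yi for (x0, _), yi in zip(X, y))
    g1 = sum(x1 * yi for (_, x1), yi in zip(X, y))
    det = a * d - b * b
    return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)

# Hypothetical shared "encodings" plus a researcher's own ground-truth
# labels, generated as y = 3*f0 + f1 so the fit is easy to check.
X = [(0.2, 1.0), (0.5, 0.4), (0.9, 0.7), (0.1, 0.3)]
y = [3 * f0 + f1 for f0, f1 in X]
beta = ridge_fit(X, y)
print(round(beta[0], 3), round(beta[1], 3))
```

Because the encodings are task-agnostic, only this cheap final regression changes from one prediction task to the next.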
By: | Olivier Guéant (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Iuliia Manziuk (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Jiang Pu |
Abstract: | When firms want to buy back their own shares, they can choose between several alternatives. While they often carry out open market repurchases, they also increasingly rely on banks through complex buyback contracts involving option components, e.g. accelerated share repurchase contracts, VWAP-minus profit-sharing contracts, etc. The entanglement between the execution problem and the option hedging problem makes the management of these contracts a difficult task that should not boil down to simple Greek-based risk hedging, contrary to what happens with classical books of options. In this paper, we propose a machine learning method to optimally manage several types of buyback contract. In particular, we recover strategies similar to those obtained in the literature with partial differential equation and recombinant tree methods, and show that our new method, which does not suffer from the curse of dimensionality, makes it possible to address types of contract that could not be addressed with grid or tree methods. |
Keywords: | ASR contracts; Optimal stopping; Stochastic optimal control; Deep learning; Recurrent neural networks; Reinforcement learning |
Date: | 2020–11–04 |
URL: | http://d.repec.org/n?u=RePEc:hal:cesptp:hal-02987889&r=all |
By: | Lardinois, Christian; Hirou, Catherine; D'Avignon, Jacques |
Keywords: | Public Economics |
Date: | 2020–10–22 |
URL: | http://d.repec.org/n?u=RePEc:ags:ctrf20:305894&r=all |
By: | Brian Quistorff; Gentry Johnson |
Abstract: | Restricting randomization in the design of experiments (e.g., using blocking/stratification, pair-wise matching, or rerandomization) can improve the treatment-control balance on important covariates and therefore improve the estimation of the treatment effect, particularly for small- and medium-sized experiments. Existing guidance on how to identify these variables and implement the restrictions is incomplete and conflicting. We show that the differences mainly arise because what is important in the pre-treatment data may not translate to the post-treatment data. We highlight settings where there is sufficient data to provide clear guidance and outline improved methods to largely automate the process using modern machine learning (ML) techniques. We show in simulations using real-world data that these methods reduce both the mean squared error of the estimate (14%-34%) and the size of the standard error (6%-16%). |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.15966&r=all |
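A minimal rerandomization loop shows the mechanics the paper builds on: redraw assignments until a covariate balance criterion is met. The threshold, covariate, and 50/50 split are assumptions; the paper's ML-driven choice of which covariates to balance is not shown:

```python
import random

def rerandomize(covariate, threshold=0.05, seed=0, max_draws=10_000):
    """Redraw a 50/50 treatment assignment until the treated-vs-control
    difference in covariate means falls below `threshold`."""
    rng = random.Random(seed)
    n = len(covariate)
    for draw in range(1, max_draws + 1):
        idx = list(range(n))
        rng.shuffle(idx)
        treat = set(idx[:n // 2])
        mt = sum(covariate[i] for i in treat) / (n // 2)
        mc = sum(covariate[i] for i in range(n)
                 if i not in treat) / (n - n // 2)
        if abs(mt - mc) < threshold:
            return treat, draw
    raise RuntimeError("no acceptable draw found")

data_rng = random.Random(42)
covariate = [data_rng.gauss(0, 1) for _ in range(40)]
treat, draws = rerandomize(covariate)
print(len(treat), draws)
```

In practice the accepted-draw threshold and the set of balanced covariates drive the MSE gains the paper reports; inference must also account for the restricted randomization distribution.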
By: | Elizabeth Fons; Paula Dawson; Xiao-jun Zeng; John Keane; Alexandros Iosifidis |
Abstract: | Stock classification is a challenging task due to the high levels of noise and volatility in stock returns. In this paper we show that transfer learning can help with this task: we pre-train a model to extract universal features on the full universe of stocks of the S&P 500 index and then transfer it to another model that directly learns a trading rule. Transferred models deliver more than double the risk-adjusted returns of their counterparts trained from scratch. In addition, we propose the use of data augmentation on the feature space defined as the output of a pre-trained model (i.e. augmenting the aggregated time-series representation). We compare this augmentation approach with the standard one, i.e. augmenting the time series in the input space. We show that augmentation in the feature space leads to a 20% increase in risk-adjusted return compared to a model trained with transfer learning but without augmentation. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.04545&r=all |
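The feature-space augmentation can be sketched directly: jitter the vectors a pre-trained extractor produces, rather than the raw time series. The noise scale, toy vectors, and labels are assumptions; in the paper the vectors would be the pre-trained model's aggregated representations:

```python
import random

def augment_features(features, labels, copies=3, sigma=0.02, seed=0):
    """Enlarge a dataset of extracted feature vectors by adding
    `copies` noisy replicas of each vector (label unchanged)."""
    rng = random.Random(seed)
    aug_X, aug_y = list(features), list(labels)
    for x, y in zip(features, labels):
        for _ in range(copies):
            aug_X.append([v + rng.gauss(0, sigma) for v in x])
            aug_y.append(y)
    return aug_X, aug_y

X = [[0.1, 0.9], [0.8, 0.2]]   # hypothetical extracted feature vectors
y = ["up", "down"]
AX, Ay = augment_features(X, y)
print(len(AX), Ay.count("up"))
```

Perturbing in feature space keeps the augmented points close to the learned representation manifold, which is one intuition for why it can beat input-space augmentation.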
By: | Kazuya Kaneko; Koichi Miyamoto; Naoyuki Takeda; Kazuyoshi Yoshino |
Abstract: | Monte Carlo integration using quantum computers has been widely investigated, including applications to concrete problems. It is known that quantum algorithms based on quantum amplitude estimation (QAE) can compute an integral with fewer iterative calls of the quantum circuit that evaluates the integrand than the number of integrand calls required by classical methods. However, issues concerning the iterative operations inside the integrand circuit have received little attention. That is, in high-dimensional integration, many random numbers are used to calculate the integrand, and in some cases similar calculations are repeated to obtain one sample value of the integrand. In this paper, we point out that we can reduce the number of such repeated operations by a combination of nested QAE and the use of pseudorandom numbers (PRNs), if the integrand has a separable form with respect to the contributions from distinct random numbers. The use of PRNs, which the authors originally proposed in the context of the quantum algorithm for Monte Carlo, is the key factor here as well, since it enables parallel computation of the separable terms in the integrand. Furthermore, we pick one use case of this method in finance, credit portfolio risk measurement, and estimate to what extent the complexity is reduced. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.02165&r=all |
By: | Diunugala, Hemantha Premakumara; Mombeuil, Claudel |
Abstract: | Purpose: This study compares three different methods of predicting foreign tourist arrivals (FTAs) to Sri Lanka from the top-ten source countries, and attempts to find the best-fitted forecasting model for each country using five model performance evaluation criteria. Methods: This study employs two different univariate time-series approaches and one Artificial Intelligence (AI) approach to develop models that best explain tourist arrivals to Sri Lanka from the top-ten tourist-generating countries. The univariate time-series approach contains two main types of statistical models, namely deterministic models and stochastic models. Results: The results show that Winters' exponential smoothing and ARIMA are the best methods to forecast tourist arrivals to Sri Lanka. Furthermore, the results show that the accuracy of the best forecasting model, based on the MAPE criterion, falls between 5 and 9 percent for the models of India, China, Germany, Russia, and Australia, and between 10 and 15 percent for the models of the UK, France, the USA, Japan, and the Maldives. Implications: The overall results of this study provide valuable insights for tourism management and policy development in Sri Lanka. Successful forecasting of FTAs for each source market provides a practical planning tool to destination decision-makers. |
Keywords: | foreign tourist arrivals, Winters' exponential smoothing, ARIMA, simple recurrent neural network, Sri Lanka |
JEL: | C45 C5 Z0 |
Date: | 2020–10–30 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:103779&r=all |
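As an illustration of the smoothing family and the MAPE criterion named in the results, here is a minimal single-exponential-smoothing forecaster. This is the level equation only - Winters' method adds trend and seasonal components - with hypothetical arrival figures and an assumed smoothing constant:

```python
def ses_forecast(series, alpha=0.4):
    """Simple exponential smoothing: returns one-step-ahead forecasts,
    where preds[t] predicts series[t] from the data before it."""
    level, preds = series[0], []
    for x in series:
        preds.append(level)
        level += alpha * (x - level)   # update the smoothed level
    return preds

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actual, forecast)) / len(actual)

arrivals = [40, 42, 45, 44, 48, 50, 49, 53]  # hypothetical monthly FTAs (000s)
preds = ses_forecast(arrivals)
print(round(mape(arrivals, preds), 2))
```

The study's 5-15 percent MAPE bands are exactly this statistic, computed on held-out arrivals for each source country.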
By: | Jeffrey Cohen; Clark Alexander |
Abstract: | We analyze 3,171 US common stocks to create an efficient portfolio based on the Chicago Quantum Net Score (CQNS) and portfolio optimization. We begin with classical solvers and incorporate quantum annealing. We add a simulated bifurcator as a new classical solver and the new D-Wave Advantage(TM) quantum annealing computer as our new quantum solver. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.01308&r=all |
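Classical heuristic solvers of the kind the paper benchmarks can be illustrated with a generic simulated-annealing loop over binary selection vectors. The toy score below is an assumption for illustration - it is not the CQNS - and simply rewards picking exactly five high-return assets:

```python
import math
import random

def anneal(score, n, steps=5000, t0=1.0, seed=0):
    """Simulated annealing over binary selection vectors: flip one
    asset in or out, accept worse moves with Boltzmann probability
    under a linear cooling schedule, and track the best state seen."""
    rng = random.Random(seed)
    x = [rng.random() < 0.5 for _ in range(n)]
    cur_s = score(x)
    best, best_s = x[:], cur_s
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9
        i = rng.randrange(n)
        x[i] = not x[i]
        s = score(x)
        if s <= cur_s or rng.random() < math.exp((cur_s - s) / t):
            cur_s = s
            if s < best_s:
                best, best_s = x[:], s
        else:
            x[i] = not x[i]  # reject: undo the flip
    return best, best_s

# Toy objective (an assumption, not the CQNS): penalize portfolios away
# from 5 assets, reward high expected return among the selected ones.
data_rng = random.Random(7)
mu = [data_rng.random() for _ in range(20)]
score = lambda x: (sum(x) - 5) ** 2 - sum(m for m, s_ in zip(mu, x) if s_) / 5
sel, s = anneal(score, 20)
print(sum(sel), round(s, 3))
```

Quantum annealers and simulated bifurcators attack the same binary-quadratic minimization with different hardware and dynamics.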
By: | Allison Koenecke; Hal Varian |
Abstract: | As more tech companies engage in rigorous economic analyses, we are confronted with a data problem: in-house papers cannot be replicated due to the use of sensitive, proprietary, or private data. Readers are left to assume that the obscured true data (e.g., internal Google information) indeed produced the results given, or they must seek out comparable public-facing data (e.g., Google Trends) that yield similar results. One way to ameliorate this reproducibility issue is to have researchers release synthetic datasets based on their true data; this allows external parties to replicate an internal researcher's methodology. In this brief overview, we explore synthetic data generation at a high level for economic analyses. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.01374&r=all |
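A minimal version of the idea: fit the means, variances, and correlation of a private bivariate dataset and release Gaussian synthetic draws that preserve them. This is a deliberately simple sketch with assumed data; real synthetic-data methods preserve far richer structure (and add privacy guarantees):

```python
import math
import random
import statistics

def synth_bivariate(xs, ys, n, seed=0):
    """Draw n synthetic (x, y) pairs matching the private data's means,
    standard deviations, and correlation (Gaussian approximation)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    rho = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
           / ((len(xs) - 1) * sx * sy))
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        out.append((mx + sx * z1,
                    my + sy * (rho * z1 + math.sqrt(1 - rho * rho) * z2)))
    return out

# "Private" data: a noisy linear relationship.
rng = random.Random(1)
xs = [rng.gauss(0, 1) for _ in range(500)]
ys = [2 * x + rng.gauss(0, 0.5) for x in xs]
synthetic = synth_bivariate(xs, ys, 500)
print(len(synthetic))
```

An external party fitting a regression on the synthetic pairs would recover roughly the same slope as on the private data, which is the replication property the overview is after.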
By: | Maria Concetta Ambra (Department of Social Sciences and Economics, Sapienza University of Rome) |
Abstract: | This article focuses on Amazon Mechanical Turk (AMT), the crowdsourcing platform created by Amazon, with the aim of enriching our knowledge of this specific platform and contributing to the debate on the 'platform economy'. In light of the massive changes triggered by the new digital revolution, many scholars have recently examined how platform work has changed, by exploring transformations in employee status and the new content of platform work. This article addresses two interrelated questions: to what extent and in what ways does AMT challenge the boundaries between paid and unpaid digital labour? How does AMT exploit online labour to extract surplus value? The research was undertaken between December 2018 and July 2019, through the collection of 50 documents originating from three Amazon web sites. These documents were examined through the technique of content analysis using the NVivo software. In conclusion, the article explains how Amazon has been able to develop a hybrid system of human-machine work. This specific model can also be fruitfully used to speed up the machine learning process and to make it more accurate. |
Keywords: | Amazon Mechanical Turk; Crowdsourcing Platform; Digital Piecework; Intellectual Property Rights; Machine Learning |
JEL: | J30 J83 D20 O30 |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:saq:wpaper:19/20&r=all |
By: | Michael Herty; Sonja Steffensen; Anna Thünen |
Abstract: | We present a linear-quadratic Stackelberg game with a large number of followers, and we derive the mean-field limit of infinitely many followers. The relation between optimization and the mean-field limit is studied, and conditions for consistency are established. Finally, we propose a numerical method based on the derived models and present numerical results. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.03405&r=all |
By: | Vanessa Alviarez; Keith Head; Thierry Mayer |
Abstract: | We assess the consequences for consumers in 76 countries of multinational acquisitions in beer and spirits. Outcomes depend on how changes in ownership affect markups versus efficiency. We find that owner fixed effects contribute very little to the performance of brands. On average, foreign ownership tends to raise costs and lower appeal. Using the estimated model, we simulate the consequences of counterfactual national merger regulation. The US beer price index would have been 4-7% higher without divestitures. Up to 30% savings could have been obtained in Latin America by emulating the pro-competition policies of the US and EU. |
Keywords: | mergers; markups; globalisation; competition |
JEL: | F12 F23 L13 |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:cii:cepidt:2020-13&r=all |
By: | Constandina Koki; Stefanos Leonardos; Georgios Piliouras |
Abstract: | In this paper, we consider a variety of multi-state Hidden Markov models for predicting and explaining the Bitcoin, Ether and Ripple returns in the presence of state (regime) dynamics. In addition, we examine the effects of several financial, economic and cryptocurrency-specific predictors on the cryptocurrency return series. Our results indicate that the 4-state Non-Homogeneous Hidden Markov model has the best one-step-ahead forecasting performance among all competing models for all three series. The superiority of its predictive densities over the single-regime random walk model relies on the fact that the states capture alternating periods with distinct return characteristics. In particular, we identify bull, bear and calm regimes for the Bitcoin series, and periods with different profit and risk magnitudes for the Ether and Ripple series. Finally, we observe that, conditionally on the hidden states, the predictors have different linear and non-linear effects. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.03741&r=all |
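The forward-filtered hidden-state probability - the quantity behind regime identification - can be computed in a few lines for a two-state Gaussian HMM. The transition matrix, regime volatilities, and return sequence below are invented for illustration; the paper's model has four states and non-homogeneous (time-varying) transition probabilities:

```python
import math

def forward_probs(returns, trans, means, sds, init=(0.5, 0.5)):
    """Forward-filtered P(state | data so far) for a 2-state Gaussian HMM."""
    def pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    probs, cur = [], list(init)
    for r in returns:
        # predict the next state, then update with the return's likelihood
        pred = [sum(cur[i] * trans[i][j] for i in range(2)) for j in range(2)]
        upd = [pred[j] * pdf(r, means[j], sds[j]) for j in range(2)]
        norm = sum(upd)
        cur = [u / norm for u in upd]
        probs.append(cur)
    return probs

# Hypothetical calm (small moves) vs turbulent (large moves) regimes.
trans = [[0.95, 0.05], [0.10, 0.90]]
rets = [0.001, -0.002, 0.001, 0.08, -0.09, 0.07]
path = forward_probs(rets, trans, means=(0.0, 0.0), sds=(0.01, 0.05))
print([round(p[1], 2) for p in path])
```

The small early returns pin the filter on the calm state; the large later returns flip it to the turbulent one, which is the mechanism by which bull, bear, and calm regimes are identified from data.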