on Computational Economics
Issue of 2019‒02‒25
eighteen papers chosen by
By: | Francesco Lamperti (Université Panthéon-Sorbonne - Paris 1 (UP1)); Andrea Roventini (Observatoire français des conjonctures économiques); Amir Sani (Université Panthéon-Sorbonne X) |
Abstract: | Efficiently calibrating agent-based models (ABMs) to real data is an open challenge. This paper explicitly tackles parameter space exploration and calibration of ABMs by combining machine learning and intelligent iterative sampling. The proposed approach “learns” a fast surrogate meta-model using a limited number of ABM evaluations and approximates the nonlinear relationship between ABM inputs (initial conditions and parameters) and outputs. Performance is evaluated on the Brock and Hommes (1998) asset pricing model and the “Islands” endogenous growth model of Fagiolo and Dosi (2003). Results demonstrate that machine-learning surrogates obtained with the proposed iterative learning procedure provide an accurate proxy for the true model and dramatically reduce the computation time necessary for large-scale parameter space exploration and calibration. |
Keywords: | Agent based model; Calibration; Machine learning; Surrogate; Meta-model |
JEL: | C15 C52 C63 |
Date: | 2018–05 |
URL: | http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/13thfd12aa8rmplfudlgvgahff&r=all |
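The surrogate idea lends itself to a compact illustration. The sketch below rests on assumptions of our own (a toy abm_output stand-in for the expensive simulator, a gradient-boosting regressor rather than the paper's specific learner, and a single sampling round instead of the paper's iterative scheme): it trains a fast surrogate on a small batch of model runs, then uses it to screen a large pool of candidate parameters cheaply.

```python
# Minimal sketch of surrogate-assisted parameter exploration; the toy
# abm_output function and all parameter choices here are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def abm_output(theta):
    """Hypothetical stand-in for one (slow) ABM run: maps parameters to a
    scalar calibration loss (e.g., distance between simulated and real moments)."""
    return np.sin(3 * theta[0]) + (theta[1] - 0.5) ** 2 + 0.05 * rng.standard_normal()

# 1. Evaluate the expensive model on a small initial design.
budget = 100
thetas = rng.uniform(0, 1, size=(budget, 2))
losses = np.array([abm_output(t) for t in thetas])

# 2. Fit a fast surrogate of the parameter -> loss mapping.
surrogate = GradientBoostingRegressor().fit(thetas, losses)

# 3. Screen a large candidate pool with the cheap surrogate; only the most
#    promising points would be passed back to the real ABM (iteratively).
pool = rng.uniform(0, 1, size=(100_000, 2))
best = pool[np.argsort(surrogate.predict(pool))[:10]]
print("candidate calibrations:\n", best)
```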
By: | Pinto, Jeronymo Marcondes; Marçal, Emerson Fernandes |
Abstract: | Our paper evaluates two novel methods for selecting the best forecasting model, or combination of models, based on a machine-learning approach. Both methods choose the “best” model, or combination of models, from a set of candidates by cross-validation. The first, which we call MGB, is based on the seminal paper of Granger and Bates (1969), but the combination weights are estimated by cross-validation on the training set. The second, which we call CvML (Cross-Validation Machine Learning Method), selects the single model with the best forecasting performance in the process described above. The following models are used: exponential smoothing, SARIMA, artificial neural networks and threshold autoregression (TAR). Model specifications are chosen via the R packages forecast and TSA. Both methods, CvML and MGB, are applied to these models to generate forecasts from one up to twelve periods ahead at monthly frequency. We run the forecasting exercise on monthly series of industrial production indices for seven countries: Canada, Brazil, Belgium, Germany, Portugal, the UK and the USA. The data, 504 observations, were collected from the OECD database. As benchmarks we choose the average forecast combination, the Granger-Bates method, the MCS model, and the naive and seasonal naive models. Our results suggest that MGB did not perform well. CvML, however, achieved a lower mean absolute error for most countries and forecast horizons, particularly at longer horizons, surpassing all the proposed benchmarks. Similar results hold for the absolute mean forecast error. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:fgv:eesptd:498&r=all |
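As a hedged illustration of CvML-style selection, the sketch below scores a few simple forecasters by rolling-origin (time-series) cross-validation and keeps the one with the lowest mean absolute error. The simulated series, the model set (naive, seasonal naive and drift, rather than the paper's exponential smoothing/SARIMA/ANN/TAR suite) and the split scheme are all illustrative stand-ins.

```python
# Illustrative CvML-style selection: pick the forecasting model with the
# lowest rolling-origin cross-validated MAE. Models and data are toy stand-ins.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(504))  # toy monthly index, 504 obs as in the paper

def naive(train, h):            # random-walk forecast
    return np.repeat(train[-1], h)

def seasonal_naive(train, h):   # repeat last year's values
    return np.array([train[-12 + (i % 12)] for i in range(h)])

def drift(train, h):            # random walk with drift
    slope = (train[-1] - train[0]) / (len(train) - 1)
    return train[-1] + slope * np.arange(1, h + 1)

models = {"naive": naive, "snaive": seasonal_naive, "drift": drift}
h = 12
scores = {name: [] for name in models}

# Rolling-origin evaluation: each fold trains on the past, tests on the next year.
for train_idx, test_idx in TimeSeriesSplit(n_splits=5, test_size=h).split(y):
    train, test = y[train_idx], y[test_idx]
    for name, f in models.items():
        scores[name].append(np.mean(np.abs(f(train, h) - test)))

best = min(scores, key=lambda k: np.mean(scores[k]))
print("CvML pick:", best)
```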
By: | Habibifar, Saeed; Kashaninia, Alireza; Farokhi, Fardad |
Abstract: | Robot path planning has long been a favorite area for machine-learning researchers. The trajectory designed for a robot can be simple or complex, and the robot must navigate around obstacles that are either fixed or movable. One promising approach to robot path planning in dynamic and unknown environments combines an evolutionary algorithm with fuzzy logic. There are different kinds of evolutionary algorithms, such as the Genetic algorithm, the Ant Colony algorithm and the Colonial Competitive algorithm. This paper proposes a new approach for robot path planning in dynamic and unknown environments based on the Colonial Competitive algorithm combined with fuzzy rules. Implementation results show that the proposed method outperforms previous methods that relied on fuzzy logic alone. |
Keywords: | Colonial competitive Algorithm, Dynamic and unknown environment, Fixed and movable obstacles, Fuzzy Logic, Robot path planning |
JEL: | L62 L63 L91 L92 L94 |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:92255&r=all |
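A minimal sketch of the evolutionary ingredient follows: a stripped-down Colonial (Imperialist) Competitive algorithm optimizing a single path waypoint around one obstacle. The cost function, the parameters, and the omission of the fuzzy layer and of the full imperialist-competition step are our simplifications, not the paper's method.

```python
# Stripped-down Colonial Competitive Algorithm minimizing a toy path cost;
# the paper couples such a search with fuzzy rules, omitted here.
import numpy as np

rng = np.random.default_rng(2)
obstacle = np.array([0.5, 0.5])

def path_cost(waypoint):
    """Toy cost: path length via one waypoint plus a penalty near the obstacle."""
    length = np.linalg.norm(waypoint) + np.linalg.norm(np.ones(2) - waypoint)
    penalty = 5.0 * max(0.0, 0.2 - np.linalg.norm(waypoint - obstacle))
    return length + penalty

pop = rng.uniform(0, 1, size=(40, 2))      # candidate waypoints ("countries")
for _ in range(200):
    costs = np.apply_along_axis(path_cost, 1, pop)
    order = np.argsort(costs)
    imperialists, colonies = pop[order[:5]], pop[order[5:]]
    # Assimilation: each colony moves toward a random imperialist, with noise.
    targets = imperialists[rng.integers(0, 5, size=len(colonies))]
    colonies += 0.4 * (targets - colonies) + 0.02 * rng.standard_normal(colonies.shape)
    pop = np.vstack([imperialists, np.clip(colonies, 0, 1)])

best = pop[np.argmin(np.apply_along_axis(path_cost, 1, pop))]
print("best waypoint:", best)
```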
By: | Avinash Barnwal; Haripad Bharti; Aasim Ali; Vishal Singh |
Abstract: | Predicting the direction of asset prices has been an active area of study and a difficult task. Machine-learning models have been used to build robust models for this task, and ensemble methods in particular have shown better results than a single supervised method. In this paper, we stack generative and discriminative classifiers (3 generative and 9 discriminative) and optimize a one-layer neural network over their outputs to model the direction of cryptocurrency prices. Features include technical indicators covering trend, momentum, volume and volatility, combined with sentiment analysis to gain additional insight. Purged walk-forward cross-validation is used. In terms of accuracy, we present a comparative analysis of the performance of the ensemble method with stacking and the ensemble method with blending. We also develop a methodology for combined feature importance for the stacked model, and important indicators are identified on that basis. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.07855&r=all |
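A compact way to see the stacking construction: the sketch below mixes generative (naive Bayes, LDA) and discriminative base classifiers under a one-layer neural-network meta-learner using scikit-learn. The synthetic data, the reduced set of base learners, and the use of plain K-fold rather than the paper's purged walk-forward cross-validation are all simplifications.

```python
# Sketch of a stacked ensemble with a one-layer neural-network meta-learner,
# mixing generative and discriminative base classifiers on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

base = [
    ("nb", GaussianNB()),                                              # generative
    ("lda", LinearDiscriminantAnalysis()),                             # generative
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),  # discriminative
    ("lr", LogisticRegression(max_iter=1000)),                         # discriminative
]
meta = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)

# For time-series data the cv argument should be a purged walk-forward
# splitter; plain 5-fold is used here only to keep the sketch short.
stack = StackingClassifier(estimators=base, final_estimator=meta, cv=5)
stack.fit(X[:800], y[:800])
print("hold-out accuracy:", stack.score(X[800:], y[800:]))
```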
By: | Donovan Platt |
Abstract: | Interest in agent-based models of financial markets and the wider economy has increased consistently over the last few decades, in no small part due to their ability to reproduce a number of empirically-observed stylised facts that are not easily recovered by more traditional modelling approaches. Nevertheless, the agent-based modelling paradigm faces mounting criticism, focused particularly on the rigour of current validation and calibration practices, most of which remain qualitative and stylised fact-driven. While the literature on quantitative and data-driven approaches has seen significant expansion in recent years, most studies have focused on the introduction of new calibration methods that are neither benchmarked against existing alternatives nor rigorously tested in terms of the quality of the estimates they produce. We therefore compare a number of prominent ABM calibration methods, both established and novel, through a series of computational experiments in an attempt to determine the respective strengths and weaknesses of each approach and the overall quality of the resultant parameter estimates. We find that Bayesian estimation, though less popular in the literature, consistently outperforms frequentist, objective function-based approaches and results in reasonable parameter estimates in many contexts. Despite this, we also find that agent-based model calibration techniques require further development in order to definitively calibrate large-scale models. We therefore make suggestions for future research. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.05938&r=all |
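As a hedged illustration of Bayesian, simulation-based calibration in general (not necessarily one of the specific methods benchmarked in the paper), the sketch below runs rejection ABC (approximate Bayesian computation) on a toy model: draw parameters from the prior, simulate, and keep draws whose summary statistics fall close to the observed ones. The AR(1) stand-in for an ABM and the tolerance are illustrative.

```python
# Rejection ABC as one simple Bayesian simulation-based calibrator; the
# models and methods compared in the paper are far richer than this toy.
import numpy as np

rng = np.random.default_rng(3)

def simulate(theta, n=200):
    """Stand-in for an ABM: an AR(1) series whose persistence is the parameter."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + rng.standard_normal()
    return x

def summary(x):
    return np.array([x.std(), np.corrcoef(x[:-1], x[1:])[0, 1]])

observed = summary(simulate(0.7))   # pretend this came from real data

# Sample the prior, keep draws whose simulated summaries land near the data.
draws = rng.uniform(0, 0.99, size=5000)
dist = np.array([np.linalg.norm(summary(simulate(th)) - observed) for th in draws])
posterior = draws[dist < np.quantile(dist, 0.01)]
print("posterior mean/std:", posterior.mean(), posterior.std())
```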
By: | Ali Hirsa; Tugce Karatas; Amir Oskoui |
Abstract: | We apply supervised deep neural networks (DNNs) to the pricing and calibration of both vanilla and exotic options under both diffusion and pure jump processes, with and without stochastic volatility. We train our neural network models with different numbers of layers and neurons per layer and with various activation functions in order to find which combinations work better empirically. For training, we consider various loss functions and optimization routines. We demonstrate that deep neural networks expedite option pricing exponentially compared with commonly used option pricing methods, which in turn makes calibration and parameter estimation extremely fast. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.05810&r=all |
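The pricing-surrogate idea can be sketched with a closed-form teacher. Below, a small feed-forward network is trained to reproduce Black-Scholes call prices over a parameter grid; the model class (Black-Scholes rather than the paper's jump and stochastic-volatility processes), the network size and the scikit-learn implementation are our illustrative choices.

```python
# Train a DNN to reproduce an option pricing function, then price whole
# parameter grids near-instantly. Black-Scholes stands in for richer models.
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def bs_call(S, K, T, sigma, r=0.01):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(4)
n = 50_000
X = np.column_stack([
    rng.uniform(80, 120, n),    # spot S
    rng.uniform(80, 120, n),    # strike K
    rng.uniform(0.1, 2.0, n),   # maturity T
    rng.uniform(0.1, 0.5, n),   # volatility sigma
])
y = bs_call(*X.T)

# A quick sketch-level fit; a production surrogate would train far longer.
net = MLPRegressor(hidden_layer_sizes=(64, 64, 64), activation="relu",
                   max_iter=50, random_state=0).fit(X, y)
test = np.array([[100.0, 105.0, 1.0, 0.2]])
print("NN price:", net.predict(test)[0], "exact:", bs_call(*test[0]))
```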
By: | Jean-Charles Richard; Thierry Roncalli |
Abstract: | This article develops the theory of risk budgeting portfolios when weight constraints are imposed. The mathematical problem turns out to be more complex than the traditional risk budgeting problem, and the formulation of the optimization program is particularly critical for determining the right risk budgeting portfolio. We also show that numerical solutions can be found using methods from large-scale machine learning: we develop an algorithm that mixes the method of cyclical coordinate descent (CCD), the alternating direction method of multipliers (ADMM), proximal operators and Dykstra's algorithm. This theoretical framework is then applied to investment problems. In particular, we show how to dynamically control the turnover of a risk parity portfolio and how to build smart beta portfolios based on the ERC approach by improving the liquidity of the portfolio or reducing the small-cap bias. Finally, we highlight the importance of the homogeneity property of risk measures and discuss the related scaling puzzle. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.05710&r=all |
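The CCD building block is easy to exhibit in isolation. The sketch below implements the standard cyclical coordinate descent update for a long-only risk budgeting portfolio, where each coordinate solves the first-order condition Sigma_ii * x_i^2 + c_i * x_i - b_i * sigma(x) = 0; the ADMM/proximal/Dykstra machinery the paper adds for weight constraints is not reproduced.

```python
# Cyclical coordinate descent for an unconstrained long-only risk budgeting
# portfolio; the paper's weight-constrained extension is not reproduced.
import numpy as np

def ccd_risk_budgeting(Sigma, b, iters=100):
    n = len(b)
    x = np.ones(n) / n
    for _ in range(iters):
        for i in range(n):
            sigma_x = np.sqrt(x @ Sigma @ x)          # portfolio volatility
            c = Sigma[i] @ x - Sigma[i, i] * x[i]     # cross term
            # Positive root of Sigma_ii x_i^2 + c x_i - b_i sigma(x) = 0.
            x[i] = (-c + np.sqrt(c**2 + 4 * Sigma[i, i] * b[i] * sigma_x)) \
                   / (2 * Sigma[i, i])
    return x / x.sum()                                # rescale to full investment

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
b = np.array([1 / 3, 1 / 3, 1 / 3])                   # equal risk contributions (ERC)
w = ccd_risk_budgeting(Sigma, b)
rc = w * (Sigma @ w) / np.sqrt(w @ Sigma @ w)         # realised risk contributions
print("weights:", w.round(4), "risk shares:", (rc / rc.sum()).round(4))
```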
By: | Hasumi, Ryo; Iiboshi, Hirokuni |
Abstract: | This paper estimates a heterogeneous-agent New Keynesian (HANK) model for the US and Japan using three aggregate observables: real GDP, inflation and the interest rate. We combine the easy-to-use computational method for solving the model developed by Ahn, Kaplan, Moll, Winberry and Wolf (2019) with a sequential Monte Carlo (SMC) method, applying the Kalman filter for Bayesian estimation with parallel computing. This combination makes it possible to estimate a HANK model on a laptop PC (e.g., a MacBook Pro) with MATLAB, requiring neither a many-core server nor Fortran. We report estimation results for a one-asset HANK model, i.e., impulse responses, fluctuations in the distributions of heterogeneous agents, and historical decompositions for both countries. Even with the same model, different data draw different pictures. |
Keywords: | Heterogeneous Agent model, Linearization, Model Reduction, Bayesian estimation, Sequential Monte Carlo, Kalman Filter |
JEL: | C32 E12 E21 E32 E43 E52 E62 |
Date: | 2019–02–20 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:92292&r=all |
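The Kalman filter is the likelihood engine inside such an estimation. As a self-contained illustration (in Python rather than the paper's MATLAB, and with toy matrices instead of the reduced HANK system), the sketch below evaluates the log-likelihood of a linear state-space model.

```python
# Kalman-filter log-likelihood for a linear(ized) state-space model; the
# matrices here are illustrative, not the reduced HANK system.
import numpy as np

def kalman_loglik(y, T, Z, Q, H):
    """y: (nobs, ny) data; x_{t+1} = T x_t + eta_t, y_t = Z x_t + eps_t."""
    nx = T.shape[0]
    a, P = np.zeros(nx), np.eye(nx)         # prior state mean and covariance
    ll = 0.0
    for yt in y:
        v = yt - Z @ a                       # one-step prediction error
        F = Z @ P @ Z.T + H                  # its covariance
        ll += -0.5 * (np.log(np.linalg.det(2 * np.pi * F))
                      + v @ np.linalg.solve(F, v))
        K = T @ P @ Z.T @ np.linalg.inv(F)   # Kalman gain (prediction form)
        a = T @ a + K @ v                    # next predicted state
        P = T @ P @ (T - K @ Z).T + Q
    return ll

# Toy AR(1) state observed with noise.
rng = np.random.default_rng(5)
x, ys = 0.0, []
for _ in range(200):
    x = 0.9 * x + rng.standard_normal()
    ys.append([x + 0.5 * rng.standard_normal()])
ll = kalman_loglik(np.array(ys), T=np.array([[0.9]]), Z=np.array([[1.0]]),
                   Q=np.array([[1.0]]), H=np.array([[0.25]]))
print("log-likelihood:", ll)
```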
By: | Mauro Napoletano (Observatoire français des conjonctures économiques) |
Abstract: | This article discusses recent advances in agent-based modelling applied to macroeconomic analysis. I first introduce the building blocks of agent-based models. Then, relying on examples taken from recent works, I argue that agent-based models may shed complementary or new light, with respect to more standard models, on key macroeconomic issues such as endogenous business cycles, the interactions between business cycles and long-run growth, and the role of price vs. quantity adjustments in the return to full employment. Finally, I discuss some limits of agent-based models and how they are currently addressed in the literature. |
Keywords: | Agent based models; Macroeconomic analysis; Endogenous business cycles; Short and long run dynamics; Monetary and fiscal policies; Price vs quantity adjustments |
Date: | 2018–09 |
URL: | http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/2qdhj5485p93jrnf08s1meeap9&r=all |
By: | Michela Giorcelli; Nicola Lacetera; Astrid Marinoni |
Abstract: | We study the interplay between scientific progress and culture through text analysis on a corpus of about eight million books, using techniques and algorithms from machine learning. We focus on a specific scientific breakthrough, the theory of evolution through natural selection by Charles Darwin, and examine the diffusion of certain key concepts that characterized this theory in the broader cultural discourse and social imaginary. We find that some concepts in Darwin’s theory, such as Evolution, Survival, Natural Selection and Competition, diffused in the cultural discourse immediately after the publication of On the Origin of Species. Other concepts, such as Selection and Adaptation, were already present in the cultural dialogue. Moreover, we document semantic changes for most of these concepts over time. Our findings thus show a complex relation between two key factors of long-term economic growth, science and culture. Considering the evolution of these two factors jointly can offer new insights into the study of the determinants of economic development, and machine learning is a promising tool to explore these relationships. |
Keywords: | science, culture, economic history, text analysis, machine learning |
JEL: | C19 C89 N00 O00 O39 Z19 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_7499&r=all |
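The basic measurement behind such diffusion studies, tracking a concept's relative frequency over time, can be sketched in a few lines; the counts file and its column layout below are hypothetical stand-ins for a books n-gram dataset.

```python
# Sketch of a concept-diffusion curve: relative frequency of a term per year.
# The file "ngram_counts.txt" and its layout (year term count total) are
# hypothetical, not the paper's data.
from collections import defaultdict

def diffusion_curve(path, term):
    """Return {year: relative frequency of `term`} from a whitespace-delimited
    counts file with columns: year, term, count, total_tokens (hypothetical)."""
    freq = defaultdict(float)
    with open(path) as fh:
        for line in fh:
            year, word, count, total = line.split()
            if word == term:
                freq[int(year)] += int(count) / int(total)
    return dict(sorted(freq.items()))

# e.g. diffusion_curve("ngram_counts.txt", "evolution") -> {1855: ..., 1860: ...}
```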
By: | Denis Demidov; Klaus M. Frahm; Dima L. Shepelyansky |
Abstract: | We analyze the influence and interactions of the 60 largest world banks across 195 countries using the reduced Google matrix algorithm applied to the English Wikipedia network of 5 416 537 articles. While the top asset-rank positions are taken by the banks of China, with the Industrial and Commercial Bank of China in first place, we show that the network influence is dominated by US banks, with Goldman Sachs being the most central. We determine the network structure of interactions of banks and countries and the PageRank sensitivity of countries to selected banks. We also present GPU-oriented code which significantly accelerates the numerical computation of the reduced Google matrix. |
Date: | 2019–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1902.07920&r=all |
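The reduced Google matrix method builds on the ordinary Google matrix and PageRank, which the sketch below computes by power iteration on a tiny illustrative network; the reduction step itself, which isolates the effective interactions among a subset of nodes, is not reproduced here.

```python
# Google matrix and PageRank by power iteration on a 4-node toy network;
# the paper works with the 5.4M-article Wikipedia network.
import numpy as np

def pagerank(A, alpha=0.85, tol=1e-10):
    """A[i, j] = 1 if page j links to page i. Returns the PageRank vector."""
    n = A.shape[0]
    out = A.sum(axis=0)
    # Column-stochastic link matrix; dangling columns become uniform.
    S = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / n)
    G = alpha * S + (1 - alpha) / n            # Google matrix
    p = np.ones(n) / n
    while True:
        p_new = G @ p
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print("PageRank:", pagerank(A).round(4))
```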
By: | Bhattarai, Keshab; Nguyen, Dung T.K.; Nguyen, Chan V |
Abstract: | The study applies a multi-sector, multi-household static general equilibrium tax model to assess the economy-wide impacts of taxes in Vietnam. It examines two tax reform scenarios based on the tax reform plan proposed by the Vietnam Ministry of Finance. The first scenario is a 20% increase in the current Value-Added Tax (VAT) rate. The second sets a competitive Corporate Income Tax (CIT) rate equal to the lowest rate among ASEAN countries. In general, correcting current tax distortions has positive impacts on labour supply, utility, consumption, output and household welfare, as households reallocate resources from less to more productive sectors of the economy. The CGE model allows us to quantify the impacts on microeconomic and macroeconomic variables, including employment, output, prices and the capital stock, as well as on household welfare, of an increase in the standard VAT rate from 10% to 12% and a reduction in the CIT rate from 20% to 17%, as considered by the current government. This study contributes to the literature on CGE models of the Vietnamese economy and is also a small step towards finding the optimal tax structure in Vietnam. |
Keywords: | Tax reform; general equilibrium; tax analysis; Vietnam |
JEL: | C68 D58 E62 H3 |
Date: | 2018–10–31 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:92068&r=all |
By: | Bettendorf, Timo; Heinlein, Reinhold |
Abstract: | This paper presents a new approach for modelling the connectedness between asset returns. We adapt the measure of Diebold and Yılmaz (2014), which is based on the forecast error variance decomposition of a VAR model. However, their connectedness measure hinges on critical assumptions with regard to the variance-covariance matrix of the error terms. We propose to use a more agnostic empirical approach, based on a machine learning algorithm, to identify the contemporaneous structure. In a Monte Carlo study we compare the different connectedness measures and discuss their advantages and disadvantages. In an empirical application we analyse the connectedness between the G10 currencies. Our results suggest that the US dollar as well as the Norwegian krone are the most independent currencies in our sample. By contrast, the Swiss franc and New Zealand dollar have a negligible impact on other currencies. Moreover, a cluster analysis suggests that the currencies can be divided into three groups, which we classify as: commodity currencies, European currencies and safe haven/carry trade financing currencies. |
Keywords: | connectedness,networks,graph theory,vector autoregression,exchange rates |
JEL: | C32 C51 F31 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bubdps:062019&r=all |
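The baseline Diebold-Yılmaz construction being adapted can be sketched directly: estimate a VAR, compute the forecast error variance decomposition, and read connectedness off the off-diagonal shares. The sketch below uses simulated data and Cholesky identification, which is precisely the assumption the paper replaces with a machine-learning-based identification of the contemporaneous structure.

```python
# Diebold-Yilmaz-style connectedness from the orthogonalised FEVD of a VAR;
# data are simulated placeholders and identification is recursive (Cholesky).
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
data = rng.standard_normal((500, 3)).cumsum(axis=0)
data = np.diff(data, axis=0)                  # toy stationary returns

res = VAR(data).fit(2)
h = 10
Phi = res.ma_rep(h - 1)                       # MA coefficients Phi_0 .. Phi_{h-1}
P = np.linalg.cholesky(res.sigma_u)           # Cholesky identification
Theta = Phi @ P                               # orthogonalised impulse responses

num = (Theta**2).sum(axis=0)                  # contribution of shock j to var i
fevd = num / num.sum(axis=1, keepdims=True)   # each row sums to one
spillover = 100 * (fevd.sum() - np.trace(fevd)) / fevd.shape[0]
print("FEVD:\n", fevd.round(3), "\ntotal connectedness (%):", round(spillover, 1))
```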
By: | Lechner, Michael |
Abstract: | Uncovering the heterogeneity of causal effects of policies and business decisions at various levels of granularity provides substantial value to decision makers. This paper develops new estimation and inference procedures for multiple treatment models in a selection-on-observables framework by modifying the Causal Forest approach suggested by Wager and Athey (2018). The new estimators have desirable theoretical and computational properties for various aggregation levels of the causal effects. An Empirical Monte Carlo study shows that they may outperform previously suggested estimators. Inference tends to be accurate for effects relating to larger groups and conservative for effects relating to fine levels of granularity. An application to the evaluation of an active labour market programme shows the value of the new methods for applied research. |
Keywords: | average treatment effects; causal forests; causal machine learning; conditional average treatment effects; multiple treatments; selection-on-observables; statistical learning |
JEL: | C21 J68 |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:13430&r=all |
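To make the estimand concrete without reproducing the modified Causal Forest, the sketch below uses a deliberately simple stand-in, a "T-learner" with random forests, to estimate heterogeneous effects of multiple treatments under selection-on-observables on simulated data. It shares only the goal, not the theoretical guarantees or inference procedures, of the paper's estimators.

```python
# T-learner stand-in for multiple-treatment effect heterogeneity: fit one
# outcome model per treatment arm, then difference predictions. This is NOT
# the paper's modified Causal Forest; data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 4000
X = rng.standard_normal((n, 5))
D = rng.integers(0, 3, n)                              # treatments 0 (control), 1, 2
tau = {0: 0.0, 1: 1.0 + X[:, 0], 2: 0.5 * X[:, 1]}     # true heterogeneous effects
y = X[:, 0] + np.select([D == k for k in (0, 1, 2)],
                        [tau[k] for k in (0, 1, 2)]) + rng.standard_normal(n)

mu = {k: RandomForestRegressor(n_estimators=300, random_state=0)
           .fit(X[D == k], y[D == k]) for k in (0, 1, 2)}
cate_1 = mu[1].predict(X) - mu[0].predict(X)           # IATEs of treatment 1 vs 0
cate_2 = mu[2].predict(X) - mu[0].predict(X)
print("ATE(1 vs 0) ~", cate_1.mean().round(2),
      " ATE(2 vs 0) ~", cate_2.mean().round(2))
```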
By: | Scheller, Fabian; Johanning, Simon; Bruckner, Thomas |
Abstract: | Modeling the diffusion of innovations is a very challenging task, as there are various influencing factors to consider. At the same time, insights into the diffusion process can help decision makers to detect weak points of potential business models. In the literature, various models and methodologies that might tackle this problem are presented. Among these, empirically grounded agent-based modeling has turned out to be one of the most promising approaches. However, the current culture is dominated by papers that fail to document critical methodological details. Thus, existing agent-based models for real-world analysis differ extensively in their design and grounding, and therefore also in their predictions and conclusions. Additionally, the selection of modeling aspects too often seems ad hoc, without any defensible rationale. In this regard, drawing on previous experience could guide researchers. This paper seeks to synthesize relevant publications at the interface of empirical grounding, agent-based modeling and innovation diffusion to provide an overview of the existing body of knowledge. The major aim is to assess existing approaches regarding development procedure, entity and dynamics consideration, and theoretical grounding, and to suggest a future research agenda, which might lead to the development of more robust models. According to the findings of this review, future work needs to focus on generic design, model coupling, research consistency, modular testing, actor involvement, behavior modeling, network foundation, and data transparency. In a subsequent step and based on these findings, a novel modeling approach needs to be designed and implemented. |
Keywords: | Innovation diffusion models,Agent-based models,Empirically grounded models,Data driven models,Literature review |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:iirmco:012019&r=all |
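For readers new to the surveyed model class, a minimal agent-based diffusion model fits in a few lines: agents on a contact network adopt under a mix of external influence and peer pressure (a Bass-style hazard). All parameters and the random network below are illustrative, not drawn from any reviewed model.

```python
# Minimal agent-based innovation diffusion: Bass-style adoption on a random
# contact network. Parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(8)
n, p_link = 500, 0.02
G = rng.random((n, n)) < p_link
G = np.triu(G, 1)
G = G | G.T                                   # symmetric network, no self-links

adopted = np.zeros(n, dtype=bool)
p_innov, q_imit = 0.01, 0.4                   # external vs peer influence
curve = []
for step in range(60):
    peers = (G & adopted).sum(axis=1)         # adopted neighbours per agent
    degree = np.maximum(G.sum(axis=1), 1)
    prob = p_innov + q_imit * peers / degree  # agent-level adoption hazard
    adopted = adopted | (rng.random(n) < prob)
    curve.append(int(adopted.sum()))
print("adoption curve (every 10 steps):", curve[::10])
```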
By: | Hadrien De March (CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique - X - École polytechnique - CNRS - Centre National de la Recherche Scientifique); Pierre Henry-Labordere (SOCIETE GENERALE - Equity Derivatives Research Societe Generale - Société Générale) |
Abstract: | We consider the classical problem of building an arbitrage-free implied volatility surface from bid-ask quotes. We design a fast numerical procedure, for which we prove the convergence, based on the Sinkhorn algorithm that has been recently used to solve efficiently (martingale) optimal transport problems. |
Date: | 2019–02–08 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02011533&r=all |
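The plain entropy-regularised Sinkhorn iteration at the core of such procedures alternates two marginal scalings. The sketch below applies it to a generic (non-martingale) transport problem between two discrete marginals; the martingale constraint and the bid-ask calibration of the paper are not reproduced.

```python
# Entropy-regularised Sinkhorn iteration for discrete optimal transport;
# the paper's martingale variant adds further constraints not shown here.
import numpy as np

def sinkhorn(mu, nu, C, eps=0.05, iters=500):
    """Couple marginals mu, nu under cost C by alternating marginal scalings."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)               # match the column marginal
        u = mu / (K @ v)                 # match the row marginal
    return u[:, None] * K * v[None, :]   # transport plan

x = np.linspace(0, 1, 50)
mu = np.exp(-((x - 0.3) ** 2) / 0.02); mu /= mu.sum()
nu = np.exp(-((x - 0.6) ** 2) / 0.05); nu /= nu.sum()
C = (x[:, None] - x[None, :]) ** 2       # quadratic cost
plan = sinkhorn(mu, nu, C)
print("marginal errors:", abs(plan.sum(1) - mu).max(), abs(plan.sum(0) - nu).max())
```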
By: | Mohammad Ghaderi; Milosz Kadzinsky |
Abstract: | A common approach in decision analysis is to infer a preference model, in the form of a value function, from holistic decision examples. This paper introduces an analytical framework for the joint estimation of the preferences of a group of decision makers by uncovering structural patterns that regulate the general shapes of individual value functions. We investigate the impact of incorporating information on such structural patterns into the preference estimation process through an extensive simulation study and an analysis of real decision makers’ preferences. We find that accounting for structural patterns at the group level vastly improves the predictive performance of the constructed value functions at the individual level. This finding is confirmed across a wide range of decision scenarios. Moreover, the improvement in predictive performance is larger when considering the entire ranking of alternatives rather than the top choice, but it is not affected by the level of heterogeneity among the decision makers. We also find that the improvement in predictive performance in ranking problems is independent of the individual characteristics of decision makers and is larger when a smaller amount of preference information is available, while for choice problems the improvement is individual-specific and invariant to the amount of input preference information. |
Keywords: | value function, decision analysis, convex optimization, simulation, structural patterns |
JEL: | C44 C13 C53 D90 |
Date: | 2019–01 |
URL: | http://d.repec.org/n?u=RePEc:upf:upfgen:1634&r=all |
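A stripped-down version of the inference step can be written as a linear program: given holistic pairwise comparisons, find criterion weights maximising the margin by which the stated preferences hold. The sketch below fits a linear (weighted-sum) value function, whereas the paper estimates general monotone value functions jointly across a group; the alternatives and preferences are illustrative.

```python
# Maximum-margin LP inference of a linear value function from pairwise
# holistic preferences; a simplified stand-in for the paper's framework.
import numpy as np
from scipy.optimize import linprog

# Alternatives on 3 criteria (rows), all criteria to be maximised.
A = np.array([[0.8, 0.2, 0.5],
              [0.4, 0.9, 0.3],
              [0.6, 0.5, 0.9],
              [0.3, 0.4, 0.2]])
prefs = [(0, 3), (2, 1), (2, 3)]          # decision maker: a preferred to b

# Variables: 3 weights w plus a margin m; maximise m s.t. V(a) - V(b) >= m,
# i.e. w.(A[b] - A[a]) + m <= 0 for each stated preference.
A_ub = np.array([np.append(A[b] - A[a], 1.0) for a, b in prefs])
res = linprog(c=[0, 0, 0, -1.0],          # linprog minimises, so use -m
              A_ub=A_ub, b_ub=np.zeros(len(prefs)),
              A_eq=[[1, 1, 1, 0]], b_eq=[1],
              bounds=[(0, 1)] * 3 + [(None, None)])
w = res.x[:3]
print("weights:", w.round(3), "ranking scores:", (A @ w).round(3))
```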
By: | Visser, T.R.; Savelsbergh, M.W.P. |
Abstract: | Time slot management refers to the design and control of the delivery time slots offered to customers during the online ordering process. Strategic time slot management is an innovative variant in which only a single time slot is offered for each day of the week and a priori delivery routes are used to guide time slot availability. Strategic time slot management simplifies time slot control and fulfillment center operations. We propose a 2-stage stochastic programming formulation for the design of a priori delivery routes and time slot assignments, together with a sample average approximation algorithm for its solution. An efficient dynamic program is developed for calculating the expected revenue of an a priori route. An extensive computational study demonstrates the efficacy of the proposed approach and provides insights into the benefits of strategic time slot management. |
Keywords: | online grocery retailing, home delivery, time slot management, a priori routing, dynamic programming, sample average approximation |
Date: | 2019–01–01 |
URL: | http://d.repec.org/n?u=RePEc:ems:eureir:114947&r=all |
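The sample average approximation step is easy to illustrate: evaluate each candidate design's expected revenue by averaging a second-stage value over sampled demand scenarios, then keep the best design. The toy slot-assignment problem, capacity model and all parameters below are our own, not the paper's routing formulation.

```python
# Sample average approximation (SAA) on a toy time-slot design problem:
# choose one slot per zone so that expected served demand is maximised.
import numpy as np
from itertools import product

rng = np.random.default_rng(9)
zones, slots = 4, 2                          # assign one slot per zone

def revenue(design, demand, cap=8):
    """Toy second-stage value: each slot is served by one vehicle of fixed
    capacity, so orders beyond capacity in a slot are lost."""
    served = 0.0
    for s in range(slots):
        load = demand[[z for z in range(zones) if design[z] == s]].sum()
        served += min(load, cap)
    return 5.0 * served

scenarios = rng.poisson(lam=3.0, size=(200, zones))   # sampled zone demands

best_design, best_value = None, -np.inf
for design in product(range(slots), repeat=zones):    # enumerate small designs
    value = np.mean([revenue(design, d) for d in scenarios])  # sample average
    if value > best_value:
        best_design, best_value = design, value
print("best slot assignment:", best_design, "SAA value:", round(best_value, 1))
```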