NEP: New Economics Papers
on Computational Economics
Issue of 2021‒07‒12
fourteen papers chosen by
By: | Piotr Borowski (Faculty of Economic Sciences, University of Warsaw); Marcin Chlebus (Faculty of Economic Sciences, University of Warsaw) |
Abstract: | Horse racing has long attracted researchers who study market efficiency and apply complex mathematical formulas to predict race outcomes. We were the first to compare selected machine learning methods for building a profitable betting strategy for two common bets, Win and Quinella. Six classification algorithms were used under different betting scenarios, namely Classification and Regression Tree (CART), Generalized Linear Model (Glmnet), Extreme Gradient Boosting (XGBoost), Random Forest (RF), Neural Network (NN) and Linear Discriminant Analysis (LDA). Additionally, Variable Importance was applied to determine the leading horse racing factors. The data were collected from flat racetracks in Poland over 2011-2020 and cover 3,782 Arabian and Thoroughbred races in total. We managed to profit under specific circumstances, achieving a correct-bet ratio of 41% for the Win bet and over 36% for the Quinella bet using LDA and Neural Networks. The results demonstrated that it was possible to bet effectively using the chosen methods and indicated a possible market inefficiency. |
Keywords: | horse racing prediction, racetrack betting, Thoroughbred and Arabian flat racing, machine learning, Variable Importance |
JEL: | C53 C55 C45 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:war:wpaper:2021-13&r= |
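As an illustration of the pipeline sketched in the abstract, the following minimal Python example fits two of the six classifiers (LDA and Random Forest) to synthetic race data and reports the share of correct predictions, a rough stand-in for the paper's correct-bet ratio. The features and data-generating process are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))          # stand-ins for horse/jockey/track factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)  # 1 = horse wins

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    hit = (model.predict(X_te) == y_te).mean()  # rough proxy for a correct-bet ratio
    print(f"{name}: {hit:.2%} correct")

# Variable importance, as in the paper's factor analysis (RF impurity importance here)
print("RF factor importances:", models["RF"].feature_importances_.round(3))
```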
By: | Gennaro Catapano (Bank of Italy); Francesco Franceschi (Bank of Italy); Valentina Michelangeli (Bank of Italy); Michele Loberto (Bank of Italy) |
Abstract: | In this paper, we extend the agent-based model of the real estate sector described in Baptista et al. (2016) and calibrate it with Italian data. We design a novel calibration methodology built on a multivariate moment-based measure and a set of three search algorithms: a low-discrepancy series, a machine learning surrogate and a genetic algorithm. The calibrated and validated model is then used to evaluate the effects of three hypothetical borrower-based macroprudential policies: an 80 per cent loan-to-value cap, a 30 per cent cap on the loan-service-to-income ratio, and a combination of both policies. We find that, within our framework, these policy interventions tend to slow down the credit cycle and reduce the probability of defaults on mortgages. However, with respect to the Italian housing market, we find only very small effects over a five-year horizon on both property prices and mortgage defaults. This latter result is consistent with the view that the Italian household sector is financially sound. Finally, we find that restrictive policies lead to a shift in demand toward lower-quality dwellings. |
Keywords: | agent based model, housing market, macroprudential policy |
JEL: | D1 D31 E58 R2 R21 R31 |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1338_21&r= |
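The moment-based calibration idea combined with a low-discrepancy parameter search can be sketched as follows. The toy "model" and target moments are assumptions; the paper additionally deploys a machine learning surrogate and a genetic algorithm alongside the low-discrepancy series.

```python
import numpy as np
from scipy.stats import qmc

def simulate_moments(params, rng):
    """Toy stand-in for the housing ABM: returns simulated (mean, std) of prices."""
    a, b = params
    prices = a + b * rng.normal(size=1000)
    return np.array([prices.mean(), prices.std()])

target = np.array([1.0, 0.5])            # empirical moments to match
sampler = qmc.Sobol(d=2, scramble=True, seed=0)
grid = qmc.scale(sampler.random(256), l_bounds=[0.0, 0.1], u_bounds=[2.0, 1.0])

rng = np.random.default_rng(0)
losses = [np.sum((simulate_moments(p, rng) - target) ** 2) for p in grid]
best = grid[int(np.argmin(losses))]
print("calibrated parameters:", best)
```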
By: | Shafiullah Qureshi (Department of Economics, Carleton University); Ba Chu (Department of Economics, Carleton University); Fanny S. Demers (Department of Economics, Carleton University) |
Abstract: | This paper applies state-of-the-art machine learning (ML) algorithms to forecast monthly real GDP growth in Canada using both Google Trends (GT) data and official macroeconomic data (which are available ahead of the release of GDP data by Statistics Canada). We show that we can forecast real GDP growth accurately ahead of the release of GDP figures by using GT and official data (such as employment) as predictors. We first pre-select features by applying two up-to-date techniques: XGBoost’s variable importance score and PDC-SIS+, a recent variable-screening procedure for time series data. These pre-selected features are then used to build advanced ML models for forecasting real GDP growth, employing tree-based ensemble algorithms such as XGBoost, LightGBM, Random Forest, and GBM. We provide empirical evidence that the variables pre-selected by either PDC-SIS+ or XGBoost’s variable importance score can have superior forecasting ability. We find that the pre-selected GT data features perform as well as the pre-selected official data features in short-term forecasting, while the pre-selected official data features are superior in long-term forecasting. We also find that (1) the ML algorithms we employ often perform better with a large sample than with a small sample, even when the small sample has a larger set of predictors; and (2) the Random Forest (which often produces nonlinear models to capture nonlinear patterns in the data) tends to underperform a standard autoregressive model in several cases, while there is no clear evidence that either XGBoost or LightGBM consistently outperforms the other. |
Date: | 2021–05–17 |
URL: | http://d.repec.org/n?u=RePEc:car:carecp:21-05&r= |
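A compact sketch of the two-step pipeline described above, assuming the xgboost and lightgbm Python packages are installed; synthetic series stand in for the Google Trends and official predictors, and the held-out window is an illustrative choice.

```python
import numpy as np
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
T, k = 240, 50                          # monthly sample, candidate predictors
X = rng.normal(size=(T, k))
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=T)

# Step 1: pre-select features by XGBoost's variable-importance score
screen = XGBRegressor(n_estimators=200, max_depth=3).fit(X[:-12], y[:-12])
top = np.argsort(screen.feature_importances_)[-10:]   # keep the 10 best predictors

# Step 2: forecast with the pre-selected features using a tree-based ensemble
model = LGBMRegressor(n_estimators=300).fit(X[:-12, top], y[:-12])
print("out-of-sample forecasts, last 12 months:", model.predict(X[-12:, top]))
```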
By: | Janos Gabler; Tobias Raabe; Klara Röhrl; Hans-Martin von Gaudecker |
Abstract: | In order to slow the spread of the CoViD-19 pandemic, governments around the world have enacted a wide set of policies limiting the transmission of the disease. Initially, these focused on non-pharmaceutical interventions; more recently, vaccinations and large-scale rapid testing have started to play a major role. The objective of this study is to quantify the effects of these policies on the course of the pandemic, allowing for factors such as seasonality and virus strains with different transmission profiles. To do so, the study develops an agent-based simulation model, which is estimated using data from the second and third waves of the CoViD-19 pandemic in Germany. The paper finds that during a period when vaccination rates rose from 5% to 40%, rapid testing had the largest effect on reducing infection numbers. Frequent large-scale rapid testing should remain part of strategies to contain CoViD-19; it can substitute for many non-pharmaceutical interventions that come at a much larger cost to individuals, society, and the economy. |
Keywords: | CoViD-19, agent based simulation model, rapid testing, nonpharmaceutical interventions |
JEL: | C63 I18 |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2021_302&r= |
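The mechanism the paper estimates can be caricatured in a few lines of Python: in a toy agent-based epidemic, a higher daily probability of detecting (and quarantining) infectious agents lowers the final attack rate. All rates below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def run(test_rate, n_agents=10_000, days=100, beta=0.25, recover=0.1, seed=0):
    rng = np.random.default_rng(seed)
    s = np.zeros(n_agents, dtype=int)   # 0 susceptible, 1 infectious, 2 recovered
    s[rng.choice(n_agents, 20, replace=False)] = 1
    for _ in range(days):
        infectious = s == 1
        # agents detected by rapid testing quarantine and stop transmitting today
        detected = infectious & (rng.random(n_agents) < test_rate)
        force = beta * (infectious & ~detected).sum() / n_agents
        s[(s == 0) & (rng.random(n_agents) < force)] = 1
        s[infectious & (rng.random(n_agents) < recover)] = 2
    return (s > 0).mean()               # share of agents ever infected

for rate in (0.0, 0.2, 0.5):
    print(f"daily detection probability {rate:.0%}: attack rate {run(rate):.1%}")
```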
By: | Maria Ludovica Drudi (Bank of Italy); Stefano Nobili (Bank of Italy) |
Abstract: | The paper develops an early warning system to identify banks that could face liquidity crises. To obtain a robust system for measuring banks’ liquidity vulnerabilities, we compare the predictive performance of three models – logistic LASSO, random forest and Extreme Gradient Boosting – and of their combination. Using a comprehensive dataset of liquidity crisis events between December 2014 and January 2020, our early warning models’ signals are calibrated according to the policymaker's preferences between type I and II errors. Unlike most of the literature, which focuses on default risk and typically proposes a forecast horizon ranging from 4 to 6 quarters, we analyse liquidity risk and we consider a 3-month forecast horizon. The key finding is that combining different estimation procedures improves model performance and yields accurate out-of-sample predictions. The results show that the combined models achieve an extremely low percentage of false negatives, lower than the values usually reported in the literature, while at the same time limiting the number of false positives. |
Keywords: | banking crisis, early warning models, liquidity risk, lender of last resort, machine learning |
JEL: | C52 C53 G21 E58 |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1337_21&r= |
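A hedged sketch of the combination-and-calibration logic: average the crisis probabilities of three models and choose the signalling threshold that minimises a preference-weighted sum of false alarms and missed crises. The synthetic data, the GradientBoosting stand-in for Extreme Gradient Boosting, the in-sample fit, and the preference weight mu are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + X[:, 1] + rng.normal(size=2000) > 2).astype(int)   # rare crisis events

models = [
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),  # logistic LASSO
    RandomForestClassifier(n_estimators=300, random_state=0),
    GradientBoostingClassifier(random_state=0),                   # XGBoost stand-in
]
# combined crisis probability: simple average of the three models (in-sample here)
prob = np.mean([m.fit(X, y).predict_proba(X)[:, 1] for m in models], axis=0)

mu = 0.8   # assumed policymaker weight on missed crises vs. false alarms

def loss(t):
    signal = prob >= t
    false_alarms = (signal & (y == 0)).sum() / max((y == 0).sum(), 1)
    missed = (~signal & (y == 1)).sum() / max((y == 1).sum(), 1)
    return (1 - mu) * false_alarms + mu * missed

thresholds = np.linspace(0.01, 0.99, 99)
print("calibrated signalling threshold:", min(thresholds, key=loss))
```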
By: | Szymon Lis (Faculty of Economic Sciences, University of Warsaw); Marcin Chlebus (Faculty of Economic Sciences, University of Warsaw) |
Abstract: | No single model dominates existing VaR forecasting comparisons. This problem may be solved by combining forecasts. This study investigates daily volatility forecasting for commodities (gold, silver, oil, gas, copper) over 2000-2020 and identifies the source of performance improvements between individual GARCH models and forecast combination methods (mean, the lowest, the highest, CQOM, quantile regression with elastic net or LASSO regularization, random forests, gradient boosting, neural network) through the Model Confidence Set (MCS). Results indicate that individual models achieve more accurate VaR forecasts at the 0.975 confidence level, while combined forecasts are more precise at 0.99. In most cases simple combination methods (the mean or the lowest VaR) are the best. This evidence demonstrates that combining forecasts is important for getting better results from the existing models. The study shows that combining forecasts allows for more accurate VaR forecasting, although it is difficult to find accurate complex methods. |
Keywords: | Combining forecasts, Econometric models, Finance, Financial markets, GARCH models, Neural networks, Regression, Time series, Risk, Value-at-Risk, Machine learning, Model Confidence Set |
JEL: | C51 C52 C53 G32 Q01 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:war:wpaper:2021-11&r= |
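The simple combination schemes that performed best can be illustrated directly. The VaR forecast matrix below is synthetic; in the study it would come from the individual GARCH models, and the violation-rate check is a crude stand-in for the paper's MCS-based evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_models = 1000, 5
returns = rng.standard_t(df=5, size=T) * 0.01
# synthetic 99% VaR forecasts from n_models imperfect models (reported as losses < 0)
var_forecasts = -np.abs(rng.normal(loc=0.025, scale=0.005, size=(T, n_models)))

combos = {
    "mean":   var_forecasts.mean(axis=1),
    "lowest": var_forecasts.min(axis=1),   # most conservative VaR each day
}
for name, var in combos.items():
    violations = (returns < var).mean()    # should be close to 1% for a 99% VaR
    print(f"{name} combination: violation rate = {violations:.2%}")
```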
By: | Giovanni Dosi (LEM - Laboratory of Economics and Management - SSSUP - Scuola Universitaria Superiore Sant'Anna [Pisa]); Andrea Roventini; Emmanuele Russo (SSSUP - Scuola Universitaria Superiore Sant'Anna [Pisa]) |
Abstract: | In this paper, we study the effects of industrial policies on international convergence using a multi-country agent-based model which builds upon Dosi et al. (2019b). The model features a group of microfounded economies, with evolving industries, populated by heterogeneous firms that compete in international markets. In each country, technological change is driven by firms' search and innovation activities, while aggregate demand formation and distribution follow Keynesian dynamics. Interactions among countries take place via trade flows and international technological imitation. We employ the model to assess the different strategies that laggard countries can adopt to catch up with leaders: market-friendly policies; industrial policies targeting the development of firms' capabilities and R&D investments, as well as trade restrictions for infant industry protection; and protectionist policies focusing on tariffs only. We find that markets cannot do the magic: in the absence of government interventions, laggards will continue to fall behind. On the contrary, industrial policies can successfully drive international convergence among leaders and laggards, while protectionism alone is not sufficient to support catching up: countries get stuck in a sort of middle-income trap. Finally, in a global trade war, where developed economies impose retaliatory tariffs, both laggards and leaders are worse off and world productivity growth slows down. |
Keywords: | Endogenous growth, Catching up, Technology-gaps, Industrial policies, Agent-based models |
Date: | 2020–05–06 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03242369&r= |
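A stylised sketch of the catch-up mechanics at work in such models: the laggard's productivity grows through in-house innovation plus imitation of the leader, with imitation intensity as the (industrial-)policy lever. All parameter values are assumptions, far simpler than the model's firm-level dynamics.

```python
def simulate(imitation, innovation=0.01, leader_growth=0.02, periods=100):
    leader, laggard = 1.0, 0.3
    for _ in range(periods):
        gap = leader / laggard
        laggard *= 1 + innovation + imitation * (gap - 1)  # imitation closes the gap
        leader *= 1 + leader_growth
    return laggard / leader

for policy, im in [("market-friendly (no support)", 0.0),
                   ("industrial policy (capability building)", 0.02)]:
    print(f"{policy}: relative productivity after 100 periods = {simulate(im):.2f}")
```

With no imitation support the laggard's 1% growth loses to the leader's 2% and the ratio shrinks; with capability building the gap converges to a stable interior level.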
By: | Cerqua, Augusto; Letta, Marco |
Abstract: | This paper assesses the impact of the first wave of the pandemic on the local economies of one of the hardest-hit countries, Italy. We combine quarterly local labor market data with the new machine learning control method for counterfactual building. Our results document that the economic effects of the COVID-19 shock are dramatically unbalanced across the Italian territory and spatially uncorrelated with the epidemiological pattern of the first wave. The heterogeneity of employment losses is associated with exposure to social aggregation risks and pre-existing labor market fragilities. Finally, we quantify the protective role played by the labor market interventions implemented by the government and show that, while effective, they disproportionately benefitted the most developed Italian regions. Such diverging trajectories and unequal policy effects call for a place-based policy approach that promptly addresses the uneven economic geography of the current crisis. |
Keywords: | impact evaluation, counterfactual approach, machine learning, local labor markets, COVID-19, Italy |
JEL: | C53 D22 E24 R12 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:zbw:glodps:875&r= |
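In the spirit of the counterfactual approach described above, a minimal sketch: a model trained on pre-pandemic dynamics predicts each local labour market's no-COVID outcome, and the impact is actual minus predicted. The synthetic data and the Random Forest choice are assumptions, not the authors' machine learning control method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_areas = 500                                  # local labour markets
hist = rng.normal(size=(n_areas, 8))
hist = np.cumsum(hist * 0.1 + 0.02, axis=1)    # eight pre-COVID quarters of employment
shock = -0.3 + 0.1 * rng.normal(size=n_areas)  # heterogeneous COVID shock
actual_2020 = hist[:, -1] + 0.02 + shock       # observed first-wave outcome

# learn pre-COVID dynamics: predict a quarter from the three quarters before it
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(hist[:, 4:7], hist[:, 7])
counterfactual = model.predict(hist[:, 5:8])   # roll one quarter forward into 2020
impact = actual_2020 - counterfactual          # area-level employment losses
print(f"average local employment impact: {impact.mean():.2f}")
```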
By: | Buchali, Katrin |
Abstract: | With the advent of big data, unique opportunities arise for data collection and analysis and thus for personalized pricing. We simulate a self-learning algorithm setting personalized prices based on additional information about consumer sensitivities in order to analyze market outcomes for consumers who have a preference for fair, equitable outcomes. For this purpose, we compare a situation that does not consider fairness to a situation in which we allow for inequity-averse consumers. We show that the algorithm learns to charge different, revenue-maximizing prices and simultaneously increase fairness in terms of a more homogeneous distribution of prices. |
Keywords: | pricing algorithm, reinforcement learning, Q-learning, price discrimination, fairness, inequity |
JEL: | D63 D91 L12 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:zbw:hohdps:022021&r= |
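A minimal Q-learning sketch of the setting, reduced to a one-state (bandit) problem: the seller learns a price per consumer segment, and an inequity-averse demand term penalises price dispersion. Demand curves, segment sensitivities and the fairness penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = np.linspace(1.0, 5.0, 9)             # discrete action space
n_segments = 2
sensitivity = np.array([0.8, 0.3])            # price sensitivity per segment
Q = np.zeros((n_segments, len(prices)))
eps, alpha = 0.1, 0.05                        # exploration and learning rates

for t in range(20_000):
    seg = t % n_segments
    a = rng.integers(len(prices)) if rng.random() < eps else int(Q[seg].argmax())
    p = prices[a]
    p_other = prices[int(Q[1 - seg].argmax())]
    fairness_penalty = 0.2 * abs(p - p_other)          # inequity-averse consumers
    demand = max(0.0, 1.0 - sensitivity[seg] * p) - fairness_penalty
    reward = p * max(demand, 0.0)
    Q[seg, a] += alpha * (reward - Q[seg, a])          # one-state Q-update

print("learned prices per segment:", prices[Q.argmax(axis=1)])
```

Raising the fairness penalty pulls the two learned prices together, mirroring the paper's finding of a more homogeneous price distribution.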
By: | Niven Winchester (School of Economics, Auckland University of Technology); Dominic White (School of Economics, Auckland University of Technology) |
Abstract: | This paper documents Version 1.0 of the Climate PoLicy ANalysis (C-PLAN) model and presents results for the model’s baseline and a policy scenario. The C-PLAN model is a global, recursive dynamic computable general equilibrium (CGE) model tailored to the economic and emissions characteristics of New Zealand. Distinguishing features of the model include methane-reducing technologies for livestock, bioheat from forestry residues, and explicit representation of output-based allocations of emissions permits. The model was built for the New Zealand Climate Change Commission (CCC) to inform policy advice provided to the government. The computer code for the model and instructions for reproducing the results used by the CCC are publicly available. It is hoped that the C-PLAN model will support transparency in setting climate policies, help build capacity for climate policy analysis, and ultimately lay the foundations for future climate policy initiatives in New Zealand and other countries. |
Keywords: | Climate change mitigation; Computable general equilibrium; Replication; Transparency |
JEL: | C68 Q40 Q54 Q58 |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:aut:wpaper:202104&r= |
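C-PLAN itself is a full multi-sector CGE model; as a toy illustration of the underlying equilibrium logic, the sketch below solves for the market-clearing relative price in a two-household, two-good exchange economy. Endowments and expenditure shares are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

alpha = np.array([0.3, 0.7])                  # good-1 expenditure share per household
endow = np.array([[1.0, 0.2], [0.2, 1.0]])    # household endowments of goods 1, 2

def excess_demand_good1(p1, p2=1.0):          # good 2 is the numeraire
    income = endow @ np.array([p1, p2])
    demand = (alpha * income / p1).sum()      # Cobb-Douglas demand for good 1
    return demand - endow[:, 0].sum()

p_star = brentq(excess_demand_good1, 0.01, 100.0)
print(f"market-clearing relative price of good 1: {p_star:.3f}")
```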
By: | An Chen; Motonobu Kanagawa; Fangyuan Zhang |
Abstract: | Pension reform is a crucial societal problem in many countries, and traditional pension schemes, such as Pay-As-You-Go and Defined-Benefit schemes, are being replaced by more sustainable ones. One challenge for a public pension system is the management of a systematic risk that affects all individuals in one generation (e.g., a risk caused by an economic downturn). Such a risk cannot be diversified within one generation, but it may be reduced by sharing it with other (younger and/or older) generations, i.e., by intergenerational risk sharing (IRS). In this work, we investigate IRS in a Collective Defined-Contribution (CDC) pension system. We consider a CDC pension model with multiple overlapping generations, in which a funding-ratio-linked declaration rate is used as a means of IRS. We perform an extensive simulation study to investigate the mechanism of IRS. One of our main findings is that IRS works particularly effectively for protecting pension participants in the worst scenarios of a tough financial market. Apart from these economic contributions, we make a methodological contribution to pension studies by employing Bayesian optimization, a modern machine learning approach to black-box optimization, to systematically search for optimal parameters in our pension model. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.13644&r= |
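The Bayesian-optimization step can be sketched with a Gaussian-process surrogate and an expected-improvement acquisition rule. The cheap objective below stands in for the expensive pension simulation, and the single tuned parameter is hypothetical.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                        # assumed stand-in for the pension simulation
    return (x - 0.6) ** 2 + 0.05 * np.sin(20 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))       # initial design points
y = objective(X).ravel()
grid = np.linspace(0, 1, 500).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = grid[int(ei.argmax())].reshape(1, 1)          # query the best candidate
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print(f"best parameter found: {X[y.argmin()][0]:.3f}")
```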
By: | Goller, Daniel (University of St. Gallen); Harrer, Tamara (Institute for Employment Research (IAB), Nuremberg); Lechner, Michael (University of St. Gallen); Wolff, Joachim (Institute for Employment Research (IAB), Nuremberg) |
Abstract: | We investigate the effectiveness of three different job-search and training programmes for German long-term unemployed persons. On the basis of an extensive administrative data set, we evaluated the effects of those programmes at various levels of aggregation using Causal Machine Learning. We found that participants benefit from the investigated programmes, with placement services being the most effective. Effects are realised quickly and are long-lasting for every programme. While the effects are rather homogeneous for men, we found differential effects for women across various characteristics. Women benefit in particular when local labour market conditions improve. Regarding the mechanism allocating the unemployed to the different programmes, we found the observed allocation to be as effective as a random allocation. We therefore propose data-driven rules for allocating the unemployed to the respective labour market programmes that would improve on the status quo. |
Keywords: | policy evaluation, Modified Causal Forest (MCF), active labour market programmes, conditional average treatment effect (CATE) |
JEL: | J08 J68 |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp14486&r= |
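The paper uses the Modified Causal Forest; as a simpler illustrative stand-in, the sketch below estimates conditional average treatment effects (CATEs) with a two-model ("T-learner") approach on synthetic programme data. Variable names and the data-generating process are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 4))                   # person characteristics
treated = rng.random(n) < 0.5                 # programme participation
effect = 0.3 + 0.2 * (X[:, 0] > 0)            # true effect varies with X[:, 0]
y = X[:, 1] + effect * treated + rng.normal(scale=0.5, size=n)  # outcome

# fit one outcome model per treatment arm, then difference the predictions
m1 = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[treated], y[treated])
m0 = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[~treated], y[~treated])
cate = m1.predict(X) - m0.predict(X)          # CATE for every person
print(f"average effect: {cate.mean():.2f}; "
      f"effect in the X0>0 group: {cate[X[:, 0] > 0].mean():.2f}")
```

A data-driven allocation rule of the kind the paper proposes would then assign each person the programme with the largest predicted CATE.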
By: | Sukjin Han; Eric Schulman; Kristen Grauman; Santhosh Ramakrishnan |
Abstract: | Many differentiated products have key attributes that are unstructured and thus high-dimensional (e.g., design, text). Instead of treating unstructured attributes as unobservables in economic models, quantifying them can be important for answering interesting economic questions. To propose an analytical framework for this type of product, this paper considers one of the simplest design products, fonts, and investigates merger and product differentiation using an original dataset from the world's largest online marketplace for fonts. We quantify font shapes by constructing embeddings from a deep convolutional neural network. Each embedding maps a font's shape onto a low-dimensional vector. In the resulting product space, designers are assumed to engage in Hotelling-type spatial competition. From the image embeddings, we construct two alternative measures that capture the degree of design differentiation. We then study the causal effects of a merger on the merging firm's creative decisions using the constructed measures in a synthetic control method. We find that the merger causes the merging firm to increase the visual variety of its font designs. Notably, such effects are not captured by traditional measures of product offerings (e.g., specifications and the number of products) constructed from structured data. |
Date: | 2021–07–08 |
URL: | http://d.repec.org/n?u=RePEc:bri:uobdis:21/750&r= |
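The embedding step can be sketched as follows, assuming a recent torchvision and using a pretrained ResNet as a stand-in for the paper's own network: each font specimen image is mapped to a vector, and average pairwise distance serves as a crude design-differentiation measure.

```python
import torch
import torch.nn as nn
import torchvision.models as models

resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder = nn.Sequential(*list(resnet.children())[:-1])   # drop the classifier head
encoder.eval()

images = torch.randn(10, 3, 224, 224)         # stand-ins for font specimen images
with torch.no_grad():
    emb = encoder(images).flatten(1)          # 10 x 512 embedding matrix

dist = torch.cdist(emb, emb)                  # pairwise distances in design space
differentiation = dist.sum(dim=1) / (len(emb) - 1)   # average distance to rivals
print("most differentiated design:", int(differentiation.argmax()))
```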
By: | Roberto Roson (Department of Economics, University of Venice Ca' Foscari; Loyola Andalusia University; GREEN Bocconi University Milan); Camille Van der Vorst (Leuven Centre for Global Governance Studies, KU Leuven)
Abstract: | This paper presents a simulation exercise undertaken with a newly available regional general equilibrium model for the Spanish region of Andalusia. The exercise assesses the structural adjustment processes and the impacts on the Andalusian economy directly induced by the dramatic fall in tourism expenditure that occurred in 2020 due to the prevention measures implemented in response to the COVID-19 pandemic. We also undertake a preliminary evaluation of the impact on some environmental indicators, such as greenhouse gas emissions and air pollutants. The key insight emerging from our analysis is that the COVID-induced collapse of tourism demand generates highly relevant distributional consequences. |
Keywords: | Tourism, Andalusia, regional economics, CGE models, COVID-19, economic impact, environmental impact |
JEL: | C68 D58 Q51 R13 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:ven:wpaper:2021:18&r= |