on Computational Economics
Issue of 2020‒02‒10
twelve papers chosen by
By: | Oscar Claveria (AQR-IREA, University of Barcelona); Ivana Lolic (University of Zagreb); Enric Monte (Polytechnic University of Catalunya); Salvador Torra (Riskcenter-IREA, University of Barcelona); Petar Soric (University of Zagreb) |
Abstract: | In this study we construct quarterly consumer confidence indicators of unemployment for the euro area using as input the consumer expectations for sixteen socio-demographic groups elicited from the Joint Harmonised EU Consumer Survey. First, we use symbolic regressions to link unemployment rates to qualitative expectations about a wide range of economic variables. By means of genetic programming we obtain the combination of expectations that best tracks the evolution of unemployment for each group of consumers. Second, we test the out-of-sample forecasting performance of the evolved expressions. Third, we use a state-space model with time-varying parameters to identify the main macroeconomic drivers of unemployment confidence and to evaluate whether the strength of the interplay between variables varies across the economic cycle. We analyse the differences across groups, obtaining better forecasts for respondents in the first quartile of household income and for respondents with at least secondary education. We also find that the questions regarding expected major purchases over the next 12 months and savings at present are, by far, the variables that most frequently appear in the evolved expressions, hinting at their predictive potential to track the evolution of unemployment. For the economically deprived consumers, the confidence indicator seems to evolve independently of the macroeconomy. This finding is rather consistent throughout the economic cycle, with the exception of stock market returns, which governed unemployment confidence in the pre-crisis period. |
Keywords: | Unemployment, Expectations, Consumer behaviour, Forecasting, Genetic programming, State-space models |
JEL: | C51 C53 C55 D12 E24 E27 J10 |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:aqr:wpaper:202001&r=all |
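The symbolic-regression idea in the abstract above can be illustrated with a deliberately minimal sketch: search over small algebraic expression trees that combine survey-expectation series, keeping the one that best tracks a target series. This is a toy random search, not the authors' genetic-programming implementation; all names and the toy data are assumptions.

```python
import random

# Expressions are nested tuples: ("x", i) is the i-th expectation series,
# (op, left, right) applies a binary operator to two sub-expressions.
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_expr(n_vars, depth=2, rng=random):
    """Draw a random expression tree over n_vars input series."""
    if depth == 0 or rng.random() < 0.3:
        return ("x", rng.randrange(n_vars))  # terminal: one expectation series
    op = rng.choice(list(OPS))
    return (op, random_expr(n_vars, depth - 1, rng), random_expr(n_vars, depth - 1, rng))

def evaluate(expr, row):
    """Evaluate an expression tree on one observation (list of inputs)."""
    if expr[0] == "x":
        return row[expr[1]]
    return OPS[expr[0]](evaluate(expr[1], row), evaluate(expr[2], row))

def mse(expr, X, y):
    """Mean squared tracking error of an expression against the target."""
    return sum((evaluate(expr, row) - t) ** 2 for row, t in zip(X, y)) / len(y)

def symbolic_search(X, y, n_iter=500, seed=0):
    """Keep the best of n_iter randomly drawn expressions (a stand-in for
    the paper's genetic-programming evolution of expressions)."""
    rng = random.Random(seed)
    best = random_expr(len(X[0]), rng=rng)
    best_err = mse(best, X, y)
    for _ in range(n_iter):
        cand = random_expr(len(X[0]), rng=rng)
        err = mse(cand, X, y)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```

Genetic programming would additionally evolve the population via crossover and mutation rather than drawing fresh trees each round; the fitness evaluation is the same.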
By: | Yafei Han; Christopher Zegras; Francisco Camara Pereira; Moshe Ben-Akiva |
Abstract: | Discrete choice models (DCMs) and neural networks (NNs) can complement each other. We propose a neural-network-embedded choice model, TasteNet-MNL, to improve the flexibility in modeling taste heterogeneity while keeping model interpretability. The hybrid model consists of a TasteNet module: a feed-forward neural network that learns taste parameters as flexible functions of individual characteristics; and a choice module: a multinomial logit model (MNL) with manually specified utility. TasteNet and MNL are fully integrated and jointly estimated. By embedding a neural network into a DCM, we exploit a neural network's function approximation capacity to reduce specification bias. Through special structure and parameter constraints, we incorporate expert knowledge to regularize the neural network and maintain interpretability. On synthetic data, we show that TasteNet-MNL can recover the underlying non-linear utility function and provide predictions and interpretations as accurate as the true model, while examples of logit or random coefficient logit models with misspecified utility functions result in large parameter bias and low predictability. In the case study of Swissmetro mode choice, TasteNet-MNL outperforms benchmark MNLs in predictive accuracy, discovers a wider spectrum of taste variations within the population, and finds higher values of time on average. This study takes an initial step towards developing a framework to combine theory-based and data-driven approaches for discrete choice modeling. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.00922&r=all |
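The structure described above can be sketched in a few lines: an MNL choice module (softmax over utilities) whose taste parameter is produced by a learned function of individual characteristics. The one-hidden-unit "taste network" and all weight names below are illustrative assumptions standing in for the paper's TasteNet, not the authors' specification.

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities: a numerically stable softmax."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def tastenet_utility(travel_time, cost, income, w):
    """Utility with a value-of-time taste parameter that varies with income.
    A single tanh unit stands in for the paper's feed-forward network."""
    hidden = math.tanh(w["h"] * income)      # toy 'network' output
    beta_time = w["b0"] + w["b1"] * hidden   # individual-specific taste
    return beta_time * travel_time + w["beta_cost"] * cost
```

In the full model, the network weights and the MNL coefficients would be estimated jointly by maximizing the likelihood of observed choices.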
By: | Wheeler, Andrew Palmer (University of Texas at Dallas); Steenbeek, Wouter |
Abstract: | Objectives: We illustrate how a machine learning algorithm, Random Forests, can provide accurate long-term predictions of crime at micro places relative to other popular techniques. We also show how recent advances in model summaries can help to open the ‘black box’ of Random Forests, considerably improving their interpretability. Methods: We generate long-term crime forecasts for robberies in Dallas at 200 by 200 feet grid cells that allow spatially varying associations of crime generators and demographic factors across the study area. We then show how using interpretable model summaries facilitates understanding the model’s inner workings. Results: We find that Random Forests greatly outperform Risk Terrain Models and Kernel Density Estimation in forecasting future crimes across different measures of predictive accuracy, but only slightly outperform using prior counts of crime. We find that the factors predicting crime are highly non-linear and vary over space. Conclusions: We show how black-box machine learning models can provide accurate micro place-based crime predictions, yet still be interpreted in a manner that fosters understanding of why a place is predicted to be risky. Data and code to replicate the results can be downloaded from https://www.dropbox.com/sh/b3n9a6z5xw14rd6/AAAjqnoMVKjzNQnWP9eu7M1ra?dl=0 |
Date: | 2020–01–18 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:xc538&r=all |
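One of the "model summaries" used to open the black box is permutation importance: a feature is scored by how much shuffling its values degrades predictive accuracy. The sketch below is a generic, model-agnostic version (the fitted forest is abstracted as any callable); it is an illustration of the technique, not the authors' code.

```python
import random

def mse(model, X, y):
    """Mean squared error of a fitted model (any callable row -> prediction)."""
    return sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
    """Mean increase in error when one feature's column is shuffled,
    averaged over n_repeats random permutations."""
    rng = random.Random(seed)
    base = mse(model, X, y)
    col = [row[feature] for row in X]
    total = 0.0
    for _ in range(n_repeats):
        shuffled = col[:]
        rng.shuffle(shuffled)
        # Rebuild the design matrix with only this column destroyed.
        Xp = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, shuffled)]
        total += mse(model, Xp, y) - base
    return total / n_repeats
```

A feature the model ignores scores exactly zero, while features the model relies on score positively, which is what makes the summary readable as "what drives the forecast at this place".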
By: | Hans Buehler (JP Morgan); Lukas Gonon (ETH Zurich); Josef Teichmann (ETH Zurich; Swiss Finance Institute); Ben Wood (JP Morgan Chase); Baranidharan Mohan (JP Morgan); Jonathan Kochems (JP Morgan) |
Abstract: | This article discusses a new application of reinforcement learning to the problem of hedging a portfolio of “over-the-counter” derivatives under market frictions such as trading costs and liquidity constraints. It is an extended version of our recent work https://www.ssrn.com/abstract=3120710, here using notation more common in the machine learning literature. The objective is to maximize a non-linear risk-adjusted return function by trading in liquid hedging instruments such as equities or listed options. The approach presented here is the first efficient and model-independent algorithm which can be used for such problems at scale. |
Keywords: | Reinforcement Learning, Imperfect Hedging, Derivatives Pricing, Derivatives Hedging, Deep Learning |
JEL: | C61 C58 |
Date: | 2019–05 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp1980&r=all |
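The kind of objective the abstract above describes can be sketched as a mean-minus-risk functional of the hedged portfolio P&L under proportional trading costs. The paper works with general convex risk measures; the standard-deviation penalty and all names below are illustrative assumptions.

```python
def hedged_pnl(liability, price_moves, positions, cost_rate):
    """P&L of hedging a terminal liability by trading one instrument.
    positions[t] is the holding over period t; costs are proportional
    to each change in position."""
    gains = sum(pos * move for pos, move in zip(positions, price_moves))
    prev = 0.0
    costs = 0.0
    for pos in positions:
        costs += cost_rate * abs(pos - prev)  # pay on each position change
        prev = pos
    return gains - costs - liability

def risk_adjusted_objective(pnl_samples, risk_aversion):
    """Mean P&L penalized by its standard deviation across scenarios;
    the learned policy would be trained to maximize this."""
    n = len(pnl_samples)
    mean = sum(pnl_samples) / n
    var = sum((x - mean) ** 2 for x in pnl_samples) / n
    return mean - risk_aversion * var ** 0.5
```

In the reinforcement-learning setup, the positions would be the output of a policy network evaluated on simulated market scenarios, and the objective above (or a convex risk measure) is what gradient ascent optimizes.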
By: | Turchin, Peter; Korotayev, Andrey |
Abstract: | This article revisits the prediction, made in 2010, that the 2010–2020 decade would likely be a period of growing instability in the United States and Western Europe (Turchin 2010). This prediction was based on a computational model that quantified, for the USA, such structural-demographic forces for instability as popular immiseration, intraelite competition, and state weakness prior to 2010. Using these trends as inputs, the model calculated and projected forward in time the Political Stress Index, which in the past was strongly correlated with socio-political instability. Ortmans et al. (2017) conducted a similar structural-demographic study for the United Kingdom and obtained similar results. Here we use the Cross-National Time-Series Data Archive for the US, UK, and Western European countries to assess these structural-demographic predictions. We find that such measures of socio-political instability as anti-government demonstrations and riots increased dramatically during the 2010–2020 decade in all of these countries. |
Date: | 2020–01–12 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:7ahqn&r=all |
By: | Boeing, Geoff (Northeastern University) |
Abstract: | Computational notebooks offer researchers, practitioners, students, and educators the ability to interactively conduct analytics and disseminate reproducible workflows that weave together code, visuals, and narratives. This article explores the potential of computational notebooks in urban analytics and planning, demonstrating their utility through a case study of OSMnx and its tutorials repository. OSMnx is a Python package for working with OpenStreetMap data and modeling, analyzing, and visualizing street networks anywhere in the world. Its official demos and tutorials are distributed as open-source Jupyter notebooks on GitHub. This article showcases this resource by documenting the repository and demonstrating OSMnx interactively through a synoptic tutorial adapted from the repository. It illustrates how to download urban data and model street networks for various study sites, compute network indicators, visualize street centrality, calculate routes, and work with other spatial data such as building footprints and points of interest. Computational notebooks help introduce methods to new users and help researchers reach broader audiences interested in learning from, adapting, and remixing their work. Due to their utility and versatility, the ongoing adoption of computational notebooks in urban planning, analytics, and related geocomputation disciplines should continue into the future. |
Date: | 2020–01–13 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:dxtq3&r=all |
By: | Sidra Mehtab; Jaydip Sen |
Abstract: | Prediction of the future movement of stock prices has been the subject of much research. In this work, we propose a hybrid approach for stock price prediction using machine learning and deep learning-based methods. We select the NIFTY 50 index values of the National Stock Exchange of India over a period of four years, from January 2015 till December 2019. Based on the NIFTY data during the said period, we build various predictive models using machine learning approaches, and then use those models to predict the Close value of NIFTY 50 for the year 2019, with a forecast horizon of one week. For predicting the NIFTY index movement patterns, we use a number of classification methods, while for forecasting the actual Close values of the NIFTY index, various regression models are built. We then augment the predictive power of the models by building a deep learning-based regression model using a Convolutional Neural Network (CNN) with walk-forward validation. The CNN model's parameters are fine-tuned so that the validation loss stabilizes with an increasing number of iterations and the training and validation accuracies converge. We exploit the power of the CNN in forecasting future NIFTY index values using three approaches which differ in the number of variables used in forecasting, the number of sub-models used in the overall models, and the size of the input data for training the models. Extensive results are presented on various metrics for all classification and regression models. The results clearly indicate that the CNN-based multivariate forecasting model is the most effective and accurate in predicting the movement of NIFTY index values with a weekly forecast horizon. |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2001.09769&r=all |
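The walk-forward validation scheme mentioned above can be sketched independently of the CNN itself: fit on all data up to time t, forecast the next horizon (here, one week), then roll forward. The expanding-window variant below is an assumption for illustration; the study's exact windowing may differ.

```python
def walk_forward_splits(n_obs, train_size, horizon):
    """Yield (train_idx, test_idx) pairs that roll forward through time:
    train on everything observed so far, forecast the next `horizon`
    steps, then advance by `horizon` and repeat."""
    start = train_size
    while start + horizon <= n_obs:
        yield list(range(start)), list(range(start, start + horizon))
        start += horizon
```

Unlike ordinary cross-validation, no split ever trains on observations that come after its test window, which is what makes the scheme honest for time-series forecasting.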
By: | Kambale Kavese (Eastern Cape Socio Economic Consultation Council); Andrew Phiri (Department of Economics, Nelson Mandela University) |
Abstract: | This study employs a partial general equilibrium approach calibrated on the Social Accounting Matrix (SAM) and a contemporaneous dynamic computable general equilibrium (CGE) model to assess the effect of expansionary fiscal policy on economic growth, income inequality, poverty, employment and inequality reduction in South Africa. The simulation results reveal that expansionary fiscal policy i) benefits rich ‘white’ households the most and poor ‘coloured’ households the least ii) improves adult employment more than youth employment iii) improves employment in urban areas as opposed to employment in rural areas iv) has a very small effect on improving economic growth and reducing the Gini coefficient v) benefits ‘well-off’ households more than it does ‘poor’ households vi) promotes ‘low-skilled’ employment more than ‘high-skilled’ employment. Associated policy implications based on our findings are also discussed. |
Keywords: | Social Accounting Matrix (SAM); Computable General Equilibrium (CGE); New Development Plan (NDP); Inequality; Poverty; Employment; South Africa. |
JEL: | C68 D58 E16 I32 |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:mnd:wpaper:2001&r=all |
By: | Leland Bybee; Bryan T. Kelly; Asaf Manela; Dacheng Xiu |
Abstract: | We propose an approach to measuring the state of the economy via textual analysis of business news. From the full text content of 800,000 Wall Street Journal articles for 1984–2017, we estimate a topic model that summarizes business news as easily interpretable topical themes and quantifies the proportion of news attention allocated to each theme at each point in time. We then use our news attention estimates as inputs into statistical models of numerical economic time series. We demonstrate that these text-based inputs accurately track a wide range of economic activity measures and that they have incremental forecasting power for macroeconomic outcomes, above and beyond standard numerical predictors. Finally, we use our model to retrieve the news-based narratives that underlie “shocks” in numerical economic data. |
JEL: | C43 C55 C58 C82 E0 E17 E32 G0 G1 |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:26648&r=all |
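The "news attention" quantity described above can be sketched downstream of the topic model: given each article's estimated topic proportions, attention per period is the average weight a topic receives across that period's articles. The data layout below is an illustrative assumption, not the paper's pipeline.

```python
from collections import defaultdict

def news_attention(articles):
    """articles: iterable of (period, {topic: weight}) pairs, where each
    article's topic weights sum to 1 (as in a topic model's output).
    Returns {period: {topic: average share of news attention}}."""
    totals = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for period, weights in articles:
        counts[period] += 1
        for topic, w in weights.items():
            totals[period][topic] += w
    return {p: {t: w / counts[p] for t, w in ws.items()}
            for p, ws in totals.items()}
```

The resulting attention series are ordinary numerical time series and can be fed directly into forecasting regressions alongside standard macroeconomic predictors.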
By: | Axenbeck, Janna; Breithaupt, Patrick |
Abstract: | Web-based innovation indicators may provide new insights into firm-level innovation activities. However, little is known yet about the accuracy and relevance of web-based information. In this study, we use 4,485 German firms from the Mannheim Innovation Panel (MIP) 2019 to analyze which website characteristics are related to innovation activities at the firm level. Website characteristics are measured by several text mining methods and are used as features in different Random Forest classification models that are compared against each other. Our results show that the most relevant website characteristics are the website's language, the number of subpages, and the total text length. Moreover, our website characteristics show a better performance for the prediction of product innovations and innovation expenditures than for the prediction of process innovations. |
Keywords: | text as data,innovation indicators,machine learning |
JEL: | C53 C81 C83 O30 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:zbw:zewdip:19063&r=all |
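The three website characteristics the study finds most relevant (language, number of subpages, total text length) can be extracted with very little machinery. The sketch below is a toy: the language "detector" is a deliberately naive character heuristic standing in for a real one, and the crawl format is an assumption.

```python
def website_features(pages):
    """pages: {url: extracted_text} crawl of one firm's website.
    Returns the three characteristics highlighted by the study.
    The language check is a stub heuristic, not a real detector."""
    total_text = "".join(pages.values())
    lang = "de" if any(ch in total_text.lower() for ch in "äöüß") else "en"
    return {
        "n_subpages": len(pages),        # number of crawled subpages
        "text_length": len(total_text),  # total length of website text
        "language": lang,
    }
```

In the study, features like these (alongside other text-mining measures) form the input matrix for the Random Forest classifiers predicting firm-level innovation activity.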
By: | Patrick Gagliardini (USI Università della Svizzera italiana; Swiss Finance Institute); Hao Ma (USI Università della Svizzera italiana; Swiss Finance Institute, Students) |
Abstract: | This paper deals with identification and inference on the unobservable conditional factor space and its dimension in large unbalanced panels of asset returns. The model specification is nonparametric regarding the way the loadings vary in time as functions of common shocks and individual characteristics. The number of active factors can also be time-varying as an effect of the changing macroeconomic environment. The method deploys Instrumental Variables (IV) which have full-rank covariation with the factor betas in the cross-section. It allows for a large dimension of the vector generating the conditioning information by machine learning techniques. In an empirical application, we infer the conditional factor space in the panel of monthly returns of individual stocks in the CRSP dataset between January 1971 and December 2017. |
Keywords: | Large Panel, Unobservable Factors, Conditioning Information, Instrumental Variables, Machine Learning, Post-Lasso, Artificial Neural Networks |
JEL: | G12 |
Date: | 2019–07 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp1965&r=all |
By: | Masae, Makusee |
Abstract: | This dissertation develops several efficient order picker routing policies for manual picker-to-parts order picking systems. This work consists of six chapters and is structured as follows. Chapter 1 provides a brief introduction to the dissertation. Chapter 2 then presents the results of a systematic review of research on order picker routing. First, it identifies order picker routing policies in a systematic search of the literature and then develops a conceptual framework for categorizing the various policies. Order picker routing policies identified during the literature search are then descriptively analyzed and discussed in light of the developed framework. Our discussion of the state of knowledge of order picker routing shows that there is potential for future research to develop exact algorithms and heuristics for the routing of order pickers, both for order picking in specific scenarios and/or for non-conventional warehouses. One result of the literature review is that prior research on order picker routing always assumed that the picking tour starts and ends at the same location, which is usually the depot. In practice, however, it does not necessarily start and end at the same location, for example when picking tours are updated in real time while they are being completed. Therefore, Chapter 3 proposes an exact algorithm as well as a routing heuristic for a conventional warehouse with two blocks where the starting and ending points of the picking tour are not fixed to the depot, but where they can be any locations in the warehouse instead. This chapter extends an earlier work of Löffler et al. (2018), who studied the case of a conventional warehouse with a single block, and adapts the solution procedures proposed by Ratliff and Rosenthal (1983) and Roodbergen and de Koster (2001a) that are both based on graph theory and dynamic programming. 
Chapter 3 also develops a routing heuristic, denoted S*-shape, for solving the order picker routing problem in this scenario. In computational experiments, we compare the performance of the proposed routing heuristic to the exact algorithm. Our results indicate that the exact algorithm obtained tours that were between 6.32% and 35.34% shorter than those generated by the heuristic. One of the observations of Chapter 2 is that the order picker routing problem in non-conventional warehouses has not received much attention yet. Therefore, Chapter 4 studies the problem of routing an order picker in a non-conventional warehouse that has been referred to as the chevron warehouse in the literature. We propose an optimal order picker routing policy based on the solution procedures proposed by Ratliff and Rosenthal (1983) and Roodbergen and de Koster (2001a). Moreover, we modify three simple routing heuristics, namely the chevron midpoint, chevron largest gap, and chevron S-shape heuristics. The average order picking tour lengths resulting from the exact algorithm and the three routing heuristics were compared to evaluate the performance of the routing heuristics under various demand distributions and storage assignment policies used in warehouses. The results indicate that the picking tours resulting from the exact algorithm are 10.29% to 39.08% shorter than the picking tours generated by the routing heuristics. Chapter 5 then proposes an exact order picker routing algorithm for another non-conventional warehouse referred to as the leaf warehouse, and it again uses the concepts of Ratliff and Rosenthal (1983) and Roodbergen and de Koster (2001a). Moreover, it proposes four simple routing heuristics, referred to as the leaf S-shape, leaf return, leaf midpoint, and leaf largest gap heuristics. Similar to Chapter 4, we evaluate the performance of these heuristics compared to the exact algorithm for various demand distributions and storage assignment policies. 
Our results show that the picking tours resulting from the exact algorithm were, on average, between 3.96% and 43.68% shorter than the picking tours generated by the routing heuristics. Finally, Chapter 6 concludes the dissertation and presents an outlook on future research opportunities. |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:dar:wpaper:119000&r=all |
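The S-shape family of heuristics discussed above has a very simple core logic in a single-block warehouse: fully traverse every aisle that contains a pick, snaking across the warehouse, and return to the depot along the cross aisle. The sketch below is a deliberately coarse version (the odd-aisle case is handled by an extra full traversal rather than the usual go-to-furthest-pick-and-back rule); geometry parameters and names are illustrative assumptions.

```python
def s_shape_tour(pick_aisles, aisle_length, aisle_spacing):
    """Approximate tour length for a coarse S-shape policy in a
    single-block warehouse with the depot at aisle 0.
    pick_aisles: sorted indices of aisles containing at least one pick."""
    if not pick_aisles:
        return 0.0
    # Cross-aisle travel: out to the furthest visited aisle and back.
    horizontal = 2 * max(pick_aisles) * aisle_spacing
    # One full end-to-end traversal per visited aisle.
    vertical = len(pick_aisles) * aisle_length
    if len(pick_aisles) % 2 == 1:
        # Odd number of visited aisles: an extra pass is needed to exit
        # on the depot side (coarse stand-in for the furthest-pick rule).
        vertical += aisle_length
    return horizontal + vertical
```

Exact algorithms of the Ratliff-Rosenthal type instead build the shortest tour by dynamic programming over aisle-by-aisle partial tour states, which is what the dissertation adapts to two-block, chevron, and leaf layouts.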