nep-cmp New Economics Papers
on Computational Economics
Issue of 2018‒10‒08
thirteen papers chosen by



  1. Artificial neural network regression models: Predicting GDP growth By Jahn, Malte
  2. Temporal Relational Ranking for Stock Prediction By Fuli Feng; Xiangnan He; Xiang Wang; Cheng Luo; Yiqun Liu; Tat-Seng Chua
  3. Exact Solution of the Soft-Clustered Vehicle-Routing Problem By Timo Hintsch; Stefan Irnich
  4. News-based sentiment analysis in real estate: A supervised machine learning approach with support vector networks By Jochen Hausler; Marcel Lang; Jessica Ruscheinsky
  5. A Game of Tax Evasion: evidences from an agent-based model By L. S. Di Mauro; A. Pluchino; A. E. Biondo
  6. Implementing machine learning methods in Stata By Austin Nichols
  7. Applying Deep Learning to Derivatives Valuation By Ryan Ferguson; Andrew Green
  8. Automation of the technical due diligence with artificial intelligence in the real estate industry By Philipp Maximilien Mueller
  9. A New Optimal Operation Structure for Renewable-Based Microgrid Operation Based on Teaching-Learning-Based Optimization Algorithm By Tavakoli, Amir; Mirzaei, Farzad; Tashakori, Sajad
  10. Impact Assessment of Scenarios of Interregional Transfers in Colombia By Eduardo A. Haddad; Luis A. Galvis; Inácio F. Araújo-Junior; Vinicius A. Vale
  11. Nowcasting New Zealand GDP using machine learning algorithms By Adam Richardson; Thomas van Florenstein Mulder; Tugrul Vehbi
  12. Balance Sheet Implications of the Czech National Bank's Exchange Rate Commitment By Michal Franta; Tomas Holub; Branislav Saxa
  13. An Adaptive Tabu Search Algorithm for Market Clearing Problem in Turkish Day-Ahead Market By Nermin Elif Kurt; H. Bahadir Sahin; Kürşad Derinkuyu

  1. By: Jahn, Malte
    Abstract: Artificial neural networks have become increasingly popular for statistical model fitting in recent years, mainly due to increasing computational power. In this paper, an introduction to the use of artificial neural network (ANN) regression models is given. The problem of predicting the GDP growth rate of 15 industrialized economies over the period 1996-2016 serves as an example. It is shown that the ANN model yields much more accurate predictions of GDP growth rates than a corresponding linear model. In particular, ANN models capture time trends very flexibly. This is relevant for forecasting, as demonstrated by out-of-sample predictions for 2017.
    Keywords: neural network, forecasting, panel data
    JEL: C45 C53 C61 O40
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:zbw:hwwirp:185&r=cmp
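    Sketch: a minimal illustration of the kind of ANN-vs-linear comparison described above, using scikit-learn on synthetic panel-style data (the data, predictors and network size are invented for illustration, not taken from the paper):
      # Toy comparison: fit a small neural-network regression and a linear
      # baseline on synthetic "GDP growth" panel data, then compare test error.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(0)
      X = rng.normal(size=(315, 4))   # e.g., 15 economies x 21 years, 4 predictors
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=315)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      linear = LinearRegression().fit(X_tr, y_tr)
      ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                         random_state=0).fit(X_tr, y_tr)

      print("linear MSE:", mean_squared_error(y_te, linear.predict(X_te)))
      print("ANN MSE:   ", mean_squared_error(y_te, ann.predict(X_te)))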
  2. By: Fuli Feng; Xiangnan He; Xiang Wang; Cheng Luo; Yiqun Liu; Tat-Seng Chua
    Abstract: Stock prediction aims to predict the future trends of a stock in order to help investors make good investment decisions. Traditional solutions for stock prediction are based on time-series models. With the recent success of deep neural networks in modeling sequential data, deep learning has become a promising choice for stock prediction. However, most existing deep learning solutions are not optimized towards the target of investment, i.e., selecting the best stock with the highest expected revenue. Specifically, they typically formulate stock prediction as a classification (to predict stock trend) or a regression problem (to predict stock price). More importantly, they largely treat the stocks as independent of each other. The valuable signal in the rich relations between stocks (or companies), such as two stocks being in the same sector or two companies having a supplier-customer relation, is not considered. In this work, we contribute a new deep learning solution, named Relational Stock Ranking (RSR), for stock prediction. Our RSR method advances existing solutions in two major aspects: 1) tailoring the deep learning models for stock ranking, and 2) capturing the stock relations in a time-sensitive manner. The key novelty of our work is the proposal of a new component in neural network modeling, named Temporal Graph Convolution, which jointly models the temporal evolution and relation network of stocks. To validate our method, we perform back-testing on the historical data of two stock markets, NYSE and NASDAQ. Extensive experiments demonstrate the superiority of our RSR method. It outperforms state-of-the-art stock prediction solutions, achieving an average return ratio of 98% and 71% on NYSE and NASDAQ, respectively.
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.09441&r=cmp
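    Sketch: a toy numpy rendering of the general idea behind a relation-aware (graph-convolution) step over temporal stock embeddings; the relation matrix, shapes and weights are all invented, and this is not the authors' Temporal Graph Convolution implementation:
      # Toy sketch: one graph-convolution step over stock relations applied to
      # temporal embeddings (e.g., the last hidden state of a sequence model).
      import numpy as np

      rng = np.random.default_rng(0)
      n_stocks, dim = 5, 8
      H = rng.normal(size=(n_stocks, dim))   # temporal embeddings, one row per stock

      A = np.zeros((n_stocks, n_stocks))     # invented relation matrix
      A[0, 1] = A[1, 0] = 1.0                # e.g., two stocks in the same sector
      A[2, 3] = A[3, 2] = 1.0                # e.g., a supplier-customer pair

      A_hat = A + np.eye(n_stocks)           # add self-loops, then row-normalize
      A_hat /= A_hat.sum(axis=1, keepdims=True)
      W = rng.normal(size=(dim, dim))        # would be learned in a real model
      H_rel = np.tanh(A_hat @ H @ W)         # relation-aware embeddings

      scores = H_rel @ rng.normal(size=dim)  # toy scoring head
      print(np.argsort(-scores))             # predicted ranking, best stock first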
  3. By: Timo Hintsch (Johannes Gutenberg-University); Stefan Irnich (Johannes Gutenberg-University)
    Abstract: The soft-clustered vehicle-routing problem (SoftCluVRP) extends the classical capacitated vehicle-routing problem by one additional constraint: the customers are partitioned into clusters and feasible routes must respect the soft-cluster constraint, that is, all customers of the same cluster must be served by the same vehicle. In this article, we design and analyze different branch-and-price algorithms for the exact solution of the SoftCluVRP. The algorithms differ in the way the column-generation subproblem, a variant of the shortest-path problem with resource constraints (SPPRC), is solved. The standard approach for SPPRCs is based on dynamic-programming labeling algorithms. We show that even with all the recent acceleration techniques and tricks (e.g., partial pricing, bidirectional labeling, decremental state space relaxation) available for SPPRC labeling algorithms, the solution of the subproblem remains extremely difficult. The main contribution is the modeling and solution of the subproblem using a branch-and-cut algorithm. The computational experiments show that branch-and-price equipped with this integer-programming-based approach outperforms sophisticated labeling-based algorithms by one order of magnitude. The largest SoftCluVRP instances solved to optimality have more than 400 customers or more than 50 clusters.
    Keywords: Vehicle Routing, branch-and-price, shortest-path problem with resource constraints, dynamic-programming labeling, branch-and-cut
    Date: 2018–09–24
    URL: http://d.repec.org/n?u=RePEc:jgu:wpaper:1813&r=cmp
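    Sketch: a minimal forward-labeling routine for an elementary SPPRC with a single capacity resource and a simple dominance rule, on an invented toy graph (the paper's algorithms are far more elaborate):
      # Toy forward labeling for a shortest path with one resource (load <= capacity).
      # A label is (cost, load, node, visited path); dominance: at the same node,
      # a label is dominated if another has lower-or-equal cost AND load.
      import heapq

      arcs = {  # node -> list of (successor, cost, demand)
          "s": [("a", 2.0, 1), ("b", 1.0, 2)],
          "a": [("b", 1.0, 1), ("t", 4.0, 0)],
          "b": [("t", 2.0, 0)],
          "t": [],
      }
      CAPACITY = 3

      def spprc(source="s", sink="t"):
          best = []                        # completed labels at the sink
          labels = {n: [] for n in arcs}   # non-dominated (cost, load) per node
          heap = [(0.0, 0, source, (source,))]
          while heap:
              cost, load, node, path = heapq.heappop(heap)
              if node == sink:
                  best.append((cost, path))
                  continue
              for nxt, c, d in arcs[node]:
                  if nxt in path or load + d > CAPACITY:
                      continue             # elementarity and resource feasibility
                  nc, nl = cost + c, load + d
                  if any(oc <= nc and ol <= nl for oc, ol in labels[nxt]):
                      continue             # dominated by an existing label
                  labels[nxt] = [(oc, ol) for oc, ol in labels[nxt]
                                 if not (nc <= oc and nl <= ol)] + [(nc, nl)]
                  heapq.heappush(heap, (nc, nl, nxt, path + (nxt,)))
          return min(best) if best else None

      print(spprc())   # -> (3.0, ('s', 'b', 't'))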
  4. By: Jochen Hausler; Marcel Lang; Jessica Ruscheinsky
    Abstract: With the rapid growth of news, information and opinionated data available in digital form, accompanied by swift progress in textual analysis techniques, the field of sentiment analysis has become a hotspot in the area of natural language processing. Additionally, scientists can nowadays draw on increased computational power to study textual documents. These developments have allowed real estate researchers to advance beyond traditional sentiment measures such as closed-end fund discounts and survey-based measures (see e.g., Lin et al. (2009) as well as Jin et al. (2014)) and facilitate the development of new sentiment proxies. As an example, Google search volume data was successfully used to forecast commercial real estate market developments (Dietzel et al. (2014)) and to predict market volatility (Braun (2016)) as well as housing market turning points (Dietzel (2016)). Using sentiment dictionaries and content-analysis software, Walker (2014) examined the relationship between media coverage and the boom of the UK housing market. In similar fashion, Soo (2015) showed that local housing media sentiment is able to predict future house prices in US cities. However, in contrast to related research in finance, sentiment analysis in real estate still lags behind. The real estate literature so far lacks applications of more advanced machine learning techniques, such as supervised learning algorithms, for extracting sentiment from news items. Using a dataset of about 54,000 headlines from the S&P Global Market Intelligence database collected over a 12-year timespan (01/2005 – 12/2016), this paper examines the relationship between movements of both direct and indirect commercial real estate markets in the United States and media sentiment. It thereby aims to explore the performance and potential of a support vector machine as classification algorithm (see Cortes and Vapnik (1995)). By mapping headlines into a high-dimensional feature space, we can identify the polarity of individual news items and aggregate the results into three different sentiment measures. Controlling for other influencing factors and sentiment indices, we show that these 'tone' measures indeed bear the potential to explain real estate market movements over time. To our knowledge, this paper is the first to explicitly explore a support vector machine's potential in extracting media sentiment, not only for the United States but for real estate markets in general.
    Keywords: Commercial Real Estate; Machine Learning; News-based sentiment analysis; Support vector networks; United States
    JEL: R3
    Date: 2018–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2018_153&r=cmp
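    Sketch: a minimal supervised headline classifier in the spirit of the paper, combining a TF-IDF representation with a linear support vector machine in scikit-learn; the tiny training set is made up:
      # Toy headline-polarity classifier: TF-IDF features + linear SVM.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      headlines = [
          "Office vacancies fall as demand surges",
          "REIT posts record quarterly earnings",
          "Developer defaults on construction loan",
          "Mall values slump amid retail downturn",
      ]
      labels = ["positive", "positive", "negative", "negative"]

      clf = make_pipeline(TfidfVectorizer(), LinearSVC())
      clf.fit(headlines, labels)

      print(clf.predict(["Housing market rebounds strongly"]))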
  5. By: L. S. Di Mauro; A. Pluchino; A. E. Biondo
    Abstract: This paper presents a simple agent-based model of an economic system, populated by agents playing different games according to their different views about social cohesion and tax payment. After a first set of simulations, correctly replicating results from the existing literature, a wider analysis is presented in order to study the effects of a dynamic-adaptation rule, in which citizens may decide to modify their individual tax compliance according to individual criteria, such as the strength of their ethical commitment, the satisfaction gained from consumption of the public good and the perceived opinion of neighbors. Results show the presence of threshold levels in the composition of society between taxpayers and evaders, which explain the extent of the damage deriving from tax evasion.
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.08146&r=cmp
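    Sketch: a toy version of a dynamic-adaptation rule of the kind described, where compliance follows a weighted mix of ethical commitment, public-good satisfaction and neighbors' behavior; the weights, topology and threshold are invented for illustration:
      # Toy adaptation step: an agent complies if a weighted mix of ethics,
      # public-good satisfaction, and the share of compliant neighbors
      # crosses a threshold. All parameters are illustrative.
      import random

      random.seed(0)
      N = 100
      ethics = [random.random() for _ in range(N)]
      comply = [random.random() < 0.5 for _ in range(N)]

      def step(satisfaction=0.6, w=(0.4, 0.3, 0.3), threshold=0.5):
          global comply
          new = []
          for i in range(N):
              neighbors = [comply[(i - 1) % N], comply[(i + 1) % N]]  # ring topology
              peer_share = sum(neighbors) / len(neighbors)
              score = w[0] * ethics[i] + w[1] * satisfaction + w[2] * peer_share
              new.append(score > threshold)
          comply = new

      for _ in range(50):
          step()
      print("share of taxpayers:", sum(comply) / N)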
  6. By: Austin Nichols (Abt Associates)
    Abstract: This presentation will discuss some popular supervised and unsupervised machine learning algorithms and their recommended use, and then present implementations in Stata. The emphasis is on prediction and causal inference, and on how to tailor a method to a specific application.
    Date: 2018–10–15
    URL: http://d.repec.org/n?u=RePEc:boc:usug18:08&r=cmp
  7. By: Ryan Ferguson; Andrew Green
    Abstract: The universal approximation theorem of artificial neural networks states that a feed-forward network with a single hidden layer can approximate any continuous function, given a finite number of hidden units, under mild constraints on the activation functions (see Hornik, 1991; Cybenko, 1989). Deep neural networks are preferred over shallow neural networks, as the latter can be shown to require an exponentially larger number of hidden units (Telgarsky, 2016). This paper applies deep learning to train deep artificial neural networks that approximate derivative valuation functions, using a basket option as an example. To do so, it develops a Monte Carlo based sampling technique to derive appropriate training and test data sets. The paper explores a range of network geometries. The performance of the training and inference phases is presented using GPU technology.
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.02233&r=cmp
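    Sketch: a toy version of the Monte Carlo sampling-plus-training pipeline for a two-asset basket call (uncorrelated assets, invented parameters; scikit-learn stands in for the paper's GPU setup):
      # Toy pipeline: label spot levels with Monte Carlo basket-call prices,
      # then fit a neural network to approximate the valuation function.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      K, T, sigma, r, n_paths = 100.0, 1.0, 0.2, 0.01, 2000

      def mc_basket_call(s1, s2):
          z = rng.normal(size=(n_paths, 2))
          drift = (r - 0.5 * sigma**2) * T
          st = np.array([s1, s2]) * np.exp(drift + sigma * np.sqrt(T) * z)
          payoff = np.maximum(st.mean(axis=1) - K, 0.0)
          return np.exp(-r * T) * payoff.mean()

      spots = rng.uniform(60, 140, size=(500, 2))                  # training inputs
      prices = np.array([mc_basket_call(a, b) for a, b in spots])  # MC labels

      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                         random_state=0).fit(spots, prices)
      print(net.predict([[100.0, 100.0]]))   # fast approximate valuation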
  8. By: Philipp Maximilien Mueller
    Abstract: Over the real estate lifecycle, numerous documents and data are generated. The majority of building-related data is collected in day-to-day operations, such as maintenance protocols, contracts or energy consumption records. Previous successes in classification already make it possible to automatically recognize, categorize and name documents and to sort them into an individual structure in digital data rooms (Bodenbender/Kurzrock 2018). The actual added value is created in the next step: efficient data analysis with specific utilization of the data. This paper describes an approach for the automation of Technical Due Diligence (TDD) by information extraction (IE). The aim is to extract relevant information from building-related documents and to automatically gain quick insights into the condition of real estate. Global assets under management (AuM) of US$1.2 trillion (PWC, AWM Report, 2017) and a global real estate transaction volume of around US$650 billion in 2016 (JLL Global Market Perspective, 2017) show that there is a regular need to analyze building data. Transactions are a very dynamic area in which current trends focus on a more data-driven approach to save time and cost. In addition, the paper focuses on the standardization of information extraction methods for the TDD as well as the prioritization and evaluation of building-related data. The automated evaluation supports value-adding decisions in the real estate lifecycle with a detailed database. TDD audits are a key means of reducing information asymmetries, especially in large transactions. Efficient technologies are now available for IE from digital building data. Through machine learning, documents can be read and evaluated automatically. Digital data rooms and operational applications such as ERP systems serve as sources of information for the extraction. Due to the heterogeneity of the documents, both rule-based and learning-based algorithms are used. The IE draws on various techniques, especially neural networks and deep learning methods. As the documents are often only available as scans, OCR methods must be integrated. The contribution to the ERES PhD session presents the current state of information extraction in the real estate industry, the research method used for the automation of TDD and its potential benefits for real estate management.
    Keywords: Artificial Intelligence; Automation; digital building data; Information Extraction; technical due diligence
    JEL: R3
    Date: 2018–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2018_313&r=cmp
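    Sketch: a toy rule-based extraction step of the kind the abstract mentions alongside learning-based methods, pulling dates and amounts out of (OCR'd) document text with regular expressions; the patterns, fields and sample text are illustrative:
      # Toy rule-based information extraction from building-related document text.
      import re

      text = ("Maintenance contract signed 12.03.2015. "
              "Annual service fee: EUR 4,500. Next inspection due 01.06.2019.")

      dates = re.findall(r"\b\d{2}\.\d{2}\.\d{4}\b", text)
      amounts = re.findall(r"EUR\s?\d[\d.,]*\d", text)

      print({"dates": dates, "amounts": amounts})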
  9. By: Tavakoli, Amir; Mirzaei, Farzad; Tashakori, Sajad
    Abstract: This paper proposes a new optimization framework for optimal power dispatch in both grid-connected and islanded microgrid modes. Solving the microgrid operation problem with evolutionary algorithms can be faster than with analytical models due to the complexity of the problem. To demonstrate the efficiency and high performance of the proposed technique, it is applied to the IEEE 33-bus test network. The proposed technique is also compared with the analytical model and with well-known heuristic methods such as particle swarm optimization (PSO) and the genetic algorithm (GA).
    Keywords: Genetic Algorithm; Swarm Optimization; Microgrid
    JEL: C3 C6 C61 L0 L00
    Date: 2018–09–26
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:89203&r=cmp
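    Sketch: a minimal teaching-learning-based optimization (TLBO) loop on a stand-in objective; the actual dispatch model and network constraints are not reproduced:
      # Minimal TLBO: the teacher phase pulls learners toward the best solution,
      # the learner phase lets pairs of learners teach each other. Toy objective.
      import numpy as np

      rng = np.random.default_rng(0)
      f = lambda x: np.sum(x**2)         # stand-in for the dispatch cost
      n_pop, dim, iters = 20, 5, 200
      pop = rng.uniform(-5, 5, size=(n_pop, dim))

      for _ in range(iters):
          costs = np.array([f(x) for x in pop])
          teacher = pop[costs.argmin()]
          mean = pop.mean(axis=0)
          for i in range(n_pop):
              # Teacher phase: move toward the teacher, away from the class mean.
              tf = rng.integers(1, 3)                  # teaching factor in {1, 2}
              cand = pop[i] + rng.random(dim) * (teacher - tf * mean)
              if f(cand) < f(pop[i]):
                  pop[i] = cand
              # Learner phase: learn from a random classmate.
              j = rng.integers(n_pop)
              if j != i:
                  d = pop[j] - pop[i] if f(pop[j]) < f(pop[i]) else pop[i] - pop[j]
                  cand = pop[i] + rng.random(dim) * d
                  if f(cand) < f(pop[i]):
                      pop[i] = cand

      best = min(pop, key=f)
      print(f(best))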
  10. By: Eduardo A. Haddad (University of Sao Paulo); Luis A. Galvis (Banco de la República de Colombia); Inácio F. Araújo-Junior (University of Juiz de Fora); Vinicius A. Vale (Federal University of Parana)
    Abstract: We assess the economic effects of different scenarios of regional allocation of the current interregional transfer scheme in Colombia, highlighting potential tradeoffs between regional equity and efficiency. The simulations conducted in this work, using an interregional computable general equilibrium model, contribute to the analysis of the growth impact related to some of the broad objectives that central governments pursue when allocating subnational transfers to local governments. We simulate counterfactual scenarios in which redistributive policies are designed to assess potential Gross Regional Product (GRP) outcomes had they been applied to the Colombian economy. The results show that when the distribution is carried out based on regional population shares, there are potential gains in national growth together with an increase in regional disparities. However, when the distribution is carried out according to other redistributive criteria, such as the number of people in poverty or the horizontal fiscal equity gaps, there is a potential improvement in regional inequality despite negative growth effects. In this sense, if the redistributive criterion is prioritized, then in order to offset the reduction of growth, regions that face a net increase in transfers should allocate the additional resources to improve Total Factor Productivity (TFP), specifically through long-term TFP-enhancing investments, such as human capital in the form of education and health outcomes.
    Keywords: Decentralization, regional inequalities, subnational transfers
    JEL: H77 R12 R13 D58 O54
    Date: 2018–10
    URL: http://d.repec.org/n?u=RePEc:bdr:region:272&r=cmp
  11. By: Adam Richardson; Thomas van Florenstein Mulder; Tugrul Vehbi
    Abstract: This paper analyses the real-time nowcasting performance of machine learning algorithms estimated on New Zealand data. Using a large set of real-time quarterly macroeconomic indicators, we train a range of popular machine learning algorithms and nowcast real GDP growth for each quarter over the 2009Q1-2018Q1 period. We compare the predictive accuracy of these nowcasts with that of other traditional univariate and multivariate statistical models. We find that the machine learning algorithms outperform the traditional statistical models. Moreover, combining the individual machine learning nowcasts improves performance further relative to the individual nowcasts alone.
    Keywords: Nowcasting, Machine learning, Forecast evaluation
    JEL: C52 C53
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2018-47&r=cmp
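    Sketch: combining individual nowcasts by simple averaging and comparing root-mean-squared errors, in the spirit of the combination result above; all numbers are invented:
      # Toy nowcast combination: average individual model nowcasts and compare
      # root-mean-squared errors against the outturns. All numbers invented.
      import numpy as np

      actual = np.array([0.8, 0.5, 1.1, 0.9, 0.3])
      nowcasts = {
          "model_a": np.array([0.9, 0.4, 1.3, 0.7, 0.5]),
          "model_b": np.array([0.6, 0.7, 1.0, 1.1, 0.2]),
          "model_c": np.array([1.0, 0.3, 1.2, 0.8, 0.4]),
      }

      rmse = lambda pred: np.sqrt(np.mean((pred - actual) ** 2))
      for name, pred in nowcasts.items():
          print(name, round(rmse(pred), 3))

      combo = np.mean(list(nowcasts.values()), axis=0)
      print("combined", round(rmse(combo), 3))   # the average beats each model here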
  12. By: Michal Franta; Tomas Holub; Branislav Saxa
    Abstract: We present projections of the Czech National Bank's balance sheet after the discontinuation of the exchange rate commitment. Our model addresses the situation of a large central bank balance sheet with assets consisting almost exclusively of foreign exchange reserves in the circumstances of a catching-up economy exhibiting an exchange rate appreciation trend. Apart from the baseline projection, several counter-factual scenarios are discussed. The scenarios concern the evolution of the balance sheet in the cases of no exchange rate commitment and a commitment with earlier discontinuation. The simulated counter-factual duration of negative CNB equity, and thus the period of no profit distribution to the government, does not differ substantially from the baseline. The fiscal implications of the exchange rate commitment are thus estimated to be relatively small and related only to the period after the year 2030. Our stochastic simulations, however, show that the uncertainty bands are very wide. In addition, we show that the simulation tool can be employed to discuss the consequences of a long-run decline in currency in circulation, the composition of the asset side and the resumption of foreign exchange income sales by the central bank.
    Keywords: Central bank balance sheet, deterministic simulations, stochastic simulations
    JEL: E47 E52 E58
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:cnb:wpaper:2018/10&r=cmp
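    Sketch: a heavily stylized stochastic projection of central bank equity under an appreciating currency, loosely in the spirit of the simulations described (a one-equation toy with invented parameters, not the CNB's tool):
      # Stylized equity projection: FX reserves earn a foreign return but lose
      # domestic-currency value as the currency appreciates; equity accumulates
      # the net result. Stochastic paths add noise to the appreciation trend.
      import numpy as np

      rng = np.random.default_rng(0)
      reserves, equity = 3000.0, -250.0     # invented stylized levels
      foreign_return, appreciation = 0.02, 0.015
      n_paths, horizon = 1000, 30

      paths = np.empty((n_paths, horizon))
      for p in range(n_paths):
          eq = equity
          for t in range(horizon):
              shock = rng.normal(0.0, 0.02)
              eq += reserves * (foreign_return - appreciation + shock)
              paths[p, t] = eq

      turned_positive = paths.max(axis=1) > 0
      first_year = (paths > 0).argmax(axis=1)   # first year equity is positive
      print("median years to positive equity:",
            np.median(first_year[turned_positive]))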
  13. By: Nermin Elif Kurt; H. Bahadir Sahin; Kürşad Derinkuyu
    Abstract: In this study, we focus on the market clearing problem of the Turkish day-ahead electricity market. We propose a mathematical model that extends the variety of bid types for different price regions. Commercial solvers may fail to find any feasible solution to the proposed problem for some instances within the given time limits. Hence, we design an adaptive tabu search (ATS) algorithm to solve the problem. ATS discretizes the continuous search space arising from the flow variables. Our method has an adaptive radius and performs backtracking via a commercial solver. We then compare the performance of ATS with a heuristic decomposition method from the literature using synthetic data sets. We evaluate the performance of the algorithms with respect to their solution times and surplus differences. ATS performs better on most of the sets.
    Date: 2018–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1809.10554&r=cmp
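    Sketch: the core tabu-search loop (move to the best non-tabu neighbor, keep a short tabu list, track the incumbent) on a toy discretized space; the paper's adaptive radius and solver-based backtracking are not reproduced:
      # Minimal tabu search over a discretized 1-D search space (toy objective).
      import random

      random.seed(0)
      f = lambda x: (x - 37) ** 2          # stand-in for the clearing objective
      space = range(0, 101)                # discretized decision variable

      current = random.choice(list(space))
      best, tabu, tenure = current, [], 7

      for _ in range(100):
          # Neighborhood: nearby points within a fixed radius (the paper adapts it).
          neighbors = [x for x in space if 0 < abs(x - current) <= 5 and x not in tabu]
          if not neighbors:
              break
          current = min(neighbors, key=f)  # best admissible move, even if worse
          tabu.append(current)
          tabu = tabu[-tenure:]            # fixed-length tabu list
          if f(current) < f(best):
              best = current

      print(best, f(best))                 # -> 37 0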

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.