NEP: New Economics Papers
on Computational Economics
Issue of 2019‒09‒30
twelve papers chosen by
By: | Miroslav Despotovic; David Koch; Sascha Leiber |
Abstract: | The value of a property is influenced by a number of factors such as location, year of construction, usable area, etc. In particular, the classification of a building's condition plays an important role in this context, since each real estate actor (expert, broker, etc.) perceives the condition individually. This paper investigates the automatic extraction of condition-specific visual characteristics of buildings from indoor and outdoor images, as well as the automatic classification of condition classes. This is a complex task because an object of interest can appear at different positions within an image, and the object and/or the building can be captured from different distances and perspectives and under different weather and lighting conditions. Furthermore, the convolutional-neural-network-based classification method described in this paper requires a large amount of input data. The forecast results of the neural network are promising, with accuracy rates between 67% and 81% across various configurations. The described method has high development potential in both a scientific and a practical sense. The results are technically innovative and should, beyond their research contribution, also support future automation-assisted real estate valuation procedures. The primary aim of this work is to stimulate the development of new scientifically relevant methods and questions in this direction. (An illustrative code sketch follows this entry.) |
Keywords: | Hedonic Pricing; image analyses; Neural Networks |
JEL: | R3 |
Date: | 2019–01–01 |
URL: | http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_284&r=all |
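To make the image-classification task above concrete, the following is a minimal PyTorch sketch of a CNN that maps building photographs to condition classes. The layer sizes, the choice of four condition classes, and the 224x224 RGB input are illustrative assumptions; the paper does not disclose its exact network.

import torch
import torch.nn as nn

class ConditionCNN(nn.Module):
    """Toy CNN mapping an RGB photo to building-condition class scores."""
    def __init__(self, n_classes=4):                     # 4 condition classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # pooling keeps the head size fixed
        )                                                 # regardless of image resolution
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                                 # x: (batch, 3, H, W) RGB images
        return self.classifier(self.features(x).flatten(1))

model = ConditionCNN()
logits = model(torch.randn(8, 3, 224, 224))               # dummy batch of 8 photos
print(logits.shape)                                       # torch.Size([8, 4])

Cross-entropy training on labelled indoor and outdoor images would then yield a condition classifier of the kind the abstract evaluates.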
By: | Angelos Filos |
Abstract: | In this thesis, we develop a comprehensive account of the expressive power, modelling efficiency, and performance advantages of so-called trading agents (i.e., the Deep Soft Recurrent Q-Network (DSRQN) and the Mixture of Score Machines (MSM)), based both on traditional system identification (a model-based approach) and on context-independent agents (a model-free approach). The analysis provides conclusive support for the ability of model-free reinforcement learning methods to act as universal trading agents, which not only reduce computational and memory complexity (owing to their linear scaling with the size of the universe) but also generalize across assets and markets, regardless of the trading universe on which they have been trained. The relatively small volume of daily-return data in financial markets is addressed via data augmentation (a generative approach) and a choice of pre-training strategies, both of which are validated against current state-of-the-art models. For rigour, a risk-sensitive framework that includes transaction costs is considered, and its performance advantages are demonstrated in a variety of scenarios, from synthetic time series (sinusoidal, sawtooth and chirp waves) and simulated market series (based on surrogate data) through to real market data (S&P 500 and EURO STOXX 50). The analysis and simulations confirm the superiority of universal model-free reinforcement learning agents over current portfolio management models in asset allocation strategies, with performance advantages of as much as 9.2% in annualized cumulative returns and 13.4% in annualized Sharpe ratio. (An illustrative code sketch follows this entry.) |
Date: | 2019–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1909.09571&r=all |
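As a concrete illustration of the risk-sensitive, transaction-cost-aware objective mentioned in the abstract above, here is a small Python sketch of one plausible reward computation for a trading agent. The proportional cost rate, the mean-variance penalty, and the synthetic returns are assumptions, not the thesis's actual formulation.

import numpy as np

def episode_reward(weights, asset_returns, cost_rate=0.001, risk_aversion=0.1):
    """weights: (T, n_assets) allocations chosen by the agent at each step;
    asset_returns: (T, n_assets) simple returns realised over each step."""
    gross = (weights * asset_returns).sum(axis=1)              # per-step portfolio return
    turnover = np.abs(np.diff(weights, axis=0, prepend=weights[:1])).sum(axis=1)
    net = gross - cost_rate * turnover                         # charge for rebalancing
    return net.mean() - risk_aversion * net.var()              # mean-variance trade-off

T, n = 250, 5
rng = np.random.default_rng(0)
w = np.full((T, n), 1.0 / n)                                   # equal-weight baseline policy
r = rng.normal(0.0005, 0.01, size=(T, n))                      # synthetic daily returns
print(episode_reward(w, r))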
By: | Shenhao Wang; Jinhua Zhao |
Abstract: | While deep neural networks (DNNs) are increasingly applied to choice analysis, it is challenging to reconcile domain-specific behavioral knowledge with generic-purpose DNNs, to improve their interpretability and predictive power, and to identify effective regularization methods for specific tasks. This study designs a particular DNN architecture with alternative-specific utility functions (ASU-DNN) by using prior behavioral knowledge. Unlike a fully connected DNN (F-DNN), which computes the utility value of an alternative k by using the attributes of all the alternatives, ASU-DNN computes it by using only k's own attributes. Theoretically, ASU-DNN can dramatically reduce the estimation error of F-DNN because of its lighter architecture and sparser connectivity. Empirically, ASU-DNN has 2-3% higher prediction accuracy than F-DNN over the whole hyperparameter space in a private dataset that we collected in Singapore and a public dataset from the R mlogit package. The alternative-specific connectivity constraint, as a domain-knowledge-based regularization method, is more effective than the most popular generic-purpose explicit and implicit regularization methods and architectural hyperparameters. ASU-DNN is also more interpretable because it provides a more regular substitution pattern of travel mode choices than F-DNN does. The comparison between ASU-DNN and F-DNN can also aid in testing behavioral knowledge. Our results reveal that individuals are more likely to compute utility by using an alternative's own attributes, supporting the long-standing practice in choice modeling. Overall, this study demonstrates that prior behavioral knowledge can be used to guide the architecture design of a DNN, to function as an effective domain-knowledge-based regularization method, and to improve both the interpretability and predictive power of DNNs in choice analysis. (An illustrative code sketch follows this entry.) |
Date: | 2019–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1909.07481&r=all |
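The architectural idea is easy to state in code: each alternative's utility is computed from its own attributes only, and a softmax over the resulting utilities gives choice probabilities. The following PyTorch sketch illustrates this; the number of alternatives, attribute count, and layer sizes are placeholders rather than the paper's specification.

import torch
import torch.nn as nn

class ASUDNN(nn.Module):
    """Alternative-specific utility networks: utility of alternative k
    depends only on alternative k's own attributes."""
    def __init__(self, n_alternatives=4, n_attrs=5, hidden=32):
        super().__init__()
        self.utility_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(n_attrs, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_alternatives)
        ])

    def forward(self, x):                        # x: (batch, n_alternatives, n_attrs)
        utils = [net(x[:, k, :]) for k, net in enumerate(self.utility_nets)]
        return torch.cat(utils, dim=1)           # (batch, n_alternatives) utility logits

model = ASUDNN()
probs = torch.softmax(model(torch.randn(16, 4, 5)), dim=1)   # choice probabilities
print(probs.shape)                                           # torch.Size([16, 4])

A fully connected F-DNN would instead flatten all attributes of all alternatives into one input vector, which is exactly the connectivity the alternative-specific constraint removes.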
By: | Jamaledini, Ashkan; Soltani, Ali; Khazaei, Ehsan |
Abstract: | This paper presents a new heuristic algorithm, known as the charged system search algorithm, for the optimal operation of microgrids (MGs) in grid-connected mode. The algorithm is further modified with a mutation operator to speed up its convergence. Finally, the model is examined on the IEEE 69-bus test system. (An illustrative code sketch follows this entry.) |
Keywords: | Power system economics, power system market, power system operation and energy management, industrial economics |
JEL: | A1 C0 H0 L0 L6 P0 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:95896&r=all |
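The abstract gives no algorithmic detail beyond the use of a mutation operator to accelerate convergence, so the following Python sketch only illustrates that generic idea: a population-based search whose candidates drift toward the incumbent best and are then randomly mutated. The charged-system-specific move rules and the actual microgrid dispatch objective are omitted; everything here is a placeholder.

import numpy as np

def mutate(population, rate=0.1, scale=0.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(population.shape) < rate          # perturb a fraction of entries
    return population + mask * rng.normal(0.0, scale, population.shape)

def search(objective, dim=10, pop_size=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    best = min(pop, key=objective)
    for _ in range(iters):
        pop = pop + 0.3 * (best - pop)                  # placeholder attraction move
        pop = mutate(pop, rng=rng)                      # mutation operator
        cand = min(pop, key=objective)
        if objective(cand) < objective(best):
            best = cand
    return best, objective(best)

print(search(lambda x: float(np.sum(x ** 2))))          # toy quadratic objective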
By: | Florian Bourgey (CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique - X - École polytechnique - CNRS - Centre National de la Recherche Scientifique); Emmanuel Gobet (CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique - X - École polytechnique - CNRS - Centre National de la Recherche Scientifique); Clément Rey (CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique - X - École polytechnique - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | We design a meta-model for the loss distribution of a large credit portfolio in the Gaussian copula model. Using both the Wiener chaos expansion on the systemic economic factor and a Gaussian approximation on the associated truncated loss, we significantly reduce the computational time needed for sampling the loss and therefore estimating risk measures on the loss distribution. The accuracy of our method is confirmed by many numerical examples. |
Keywords: | Monte Carlo simulation,portfolio credit risk,polynomial chaos expansion,meta-model |
Date: | 2019–09–19 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02291548&r=all |
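For context, the quantity being approximated is the portfolio loss in a (here one-factor) Gaussian copula model, whose plain Monte Carlo sampler looks roughly as follows; the paper's Wiener-chaos and Gaussian meta-model is designed to avoid the cost of this brute-force loop. Portfolio size, default probability, correlation, exposure, and loss-given-default below are illustrative assumptions.

import numpy as np
from scipy.stats import norm

def sample_losses(n_samples=10_000, n_obligors=1_000, p=0.01, rho=0.2,
                  exposure=1.0, lgd=0.6, seed=0):
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(p)                               # default barrier
    z = rng.standard_normal((n_samples, 1))               # systemic economic factor
    eps = rng.standard_normal((n_samples, n_obligors))    # idiosyncratic noise
    latent = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
    defaults = latent <= threshold                        # default indicators
    return (exposure * lgd * defaults).sum(axis=1)        # loss per scenario

losses = sample_losses()
print(losses.mean(), np.quantile(losses, 0.99))           # expected loss and 99% quantile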
By: | Daan Kolkman (Technical University Eindhoven); Arjen van Witteloostuijn (Vrije Universiteit Amsterdam) |
Abstract: | This study examines the applicability of modern Data Science techniques in the domain of Strategy. We apply novel techniques from the fields of machine learning and text analysis, proceeding in two steps. First, we compare different machine learning techniques with traditional regression methods in terms of their goodness-of-fit, using a dataset of 168,055 firms that includes only basic demographic and financial information. The novel methods fare three to four times better, with the random forest technique achieving the best goodness-of-fit. Second, based on 8,163 informative websites of Dutch SMEs, we construct four additional proxies for personality and strategy variables. Including our four text-analyzed variables adds about 2.5 per cent to the R2. Together, these two contributions provide evidence of the large potential of applying modern Data Science techniques in Strategy research. We reflect on this potential in light of the common critique that machine learning offers increased predictive accuracy at the expense of explanatory insight. In particular, we argue and illustrate why and how machine learning can be a productive element in the abductive theory-building cycle. (An illustrative code sketch follows this entry.) |
JEL: | L1 |
Date: | 2019–09–20 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20190066&r=all |
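The first step's comparison can be sketched in a few lines of Python: fit a linear regression and a random forest on the same features and compare out-of-sample R-squared. The synthetic data below merely makes the snippet runnable; it is generated from a linear model and therefore will not reproduce the paper's three-to-four-fold improvement.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# synthetic stand-in for firm-level demographic and financial features
X, y = make_regression(n_samples=5_000, n_features=12, noise=25.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200,
                                                            random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, round(model.score(X_te, y_te), 3))        # out-of-sample R-squared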
By: | Jialin Liu; Chih-Min Lin; Fei Chao |
Abstract: | The market economy is closely connected to all walks of life, and stock forecasting is one of the tasks studied in this area. However, financial market information contains a great deal of noise and uncertainty, which makes economic forecasting a challenging task. Ensemble learning and deep learning are the most common methods for tackling the stock forecasting task. In this paper, we present a model that combines the advantages of the two methods to forecast changes in stock prices. The proposed method combines a CNN with gradient boosting (GBoost). Experimental results on six market indexes show that the proposed method performs better than current popular methods. (An illustrative code sketch follows this entry.) |
Date: | 2019–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1909.09563&r=all |
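The abstract does not specify how the CNN and gradient boosting are combined, so the following Python sketch shows only one plausible arrangement: a small 1-D CNN turns a window of past returns into features, and a gradient-boosting classifier predicts the next move from them. The window length, layer sizes, untrained CNN, and synthetic series are all assumptions.

import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier

window = 30
rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=2_000)                      # synthetic daily returns
X = np.stack([returns[i:i + window] for i in range(len(returns) - window)])
y = (returns[window:] > 0).astype(int)                         # next-day direction labels

cnn = nn.Sequential(nn.Conv1d(1, 8, kernel_size=5), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten())     # feature map (untrained here;
with torch.no_grad():                                          # in practice it would be trained)
    feats = cnn(torch.tensor(X, dtype=torch.float32).unsqueeze(1)).numpy()

split = int(0.8 * len(feats))
clf = GradientBoostingClassifier().fit(feats[:split], y[:split])
print("test accuracy:", clf.score(feats[split:], y[split:]))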
By: | Khazaei, Ehsan; Jamaledini, Ashkan; Soltani, Ali |
Abstract: | An islanded microgrid (MG) is a microgrid that is disconnected from the main network; while this improves safety, its operation has become one of the most important challenges in power system operation. When market prices are high, an islanded MG offers a lower operational cost. On the other hand, the nonlinear nature of the islanded MG's optimal operation problem makes such systems difficult to use. To overcome this obstacle, a new heuristic method known as the Gravitational Emulation Local Search Algorithm (GELSA) is proposed in this paper to solve the problem. The efficiency of the technique is examined on a modified IEEE 30-bus test network. |
Keywords: | Economic operation, power market, power economic, microgrid |
JEL: | A1 C0 H0 L0 L6 O1 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:95895&r=all |
By: | Sérgio Bacelar; Luis Antunes |
Abstract: | The increasing difficulty of financing the welfare state, and public retirement pensions in particular, is one outcome of declining fertility and birth rates combined with rising life expectancy. The dynamics of retirement pensions are usually studied in economics using overlapping-generations models. These models rely on simplifying assumptions, such as the use of a representative agent, to ease the problem of tractability. Alternatively, we propose to use agent-based modelling (ABM), relaxing the need for those assumptions and enabling the use of interacting, heterogeneous agents, with special importance assigned to the study of inter-generational relations. We treat pension dynamics from both economic and political perspectives. The model we build, following the ODD protocol, seeks to capture the dynamics of the choice between public and private retirement pensions, which result from the conflicting preferences of different agents but also from the cooperation between them. The aggregation of these individual preferences is done by voting. We combine a microsimulation approach, following the evolution of synthetic populations over time, with the ABM approach, studying the interactions between the different agent types. Our objective is to depict the conditions for the survival of the public pension system emerging from the relation between egoistic and altruistic individual and collective behaviours. (An illustrative code sketch follows this entry.) |
Date: | 2019–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1909.08706&r=all |
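A minimal agent-based sketch of the voting mechanism described above: heterogeneous agents with egoistic or altruistic preferences vote each period on keeping a public pension system. The preference rule, the parameters, and the absence of births and deaths are simplifying assumptions, not the paper's ODD-specified model.

import random

random.seed(1)

class Agent:
    def __init__(self, age, altruistic):
        self.age = age
        self.altruistic = altruistic

    def votes_public(self, retirement_age=65):
        if self.altruistic:
            return True                          # altruists support the public system regardless
        return self.age >= retirement_age - 10   # egoists support it only near or after retirement

agents = [Agent(age=random.randint(20, 90), altruistic=random.random() < 0.3)
          for _ in range(1_000)]

for year in range(5):                            # a few simulated voting rounds
    share = sum(a.votes_public() for a in agents) / len(agents)
    print(f"year {year}: {share:.1%} vote to keep the public pension")
    for a in agents:
        a.age += 1                               # simple ageing; no births or deaths here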
By: | Tim Morris (MRC Clinical Trials Unit, University College London); Michael Crowther (Biostatistics Research Group, Department of Health Sciences, University of Leicester) |
Abstract: | There are two broad approaches to coding a simulation study in Stata. The first is to write an rclass program that simulates and analyzes data before using the simulate command to repeat the process and store summaries of results. The second is to loop through repetitions and use the postfile family to store results. One favors the simulate approach because the code is much cleaner, so it is easier to spot mistakes. The other favors the postfile approach because it delivers a superior dataset summarizing simulation results. Both are good reasons. During yet another argument, we spotted a third approach that is unambiguously right because it uses cleanly structured code and delivers a useful dataset. This presentation will describe the issues with the simulate and postfile approaches before showing the correct approach. Simulation studies are an important element of statistical research, but they can be derailed, sometimes badly, by coding errors. The approach that gives both clean code and a usable dataset is worthwhile for all but the simplest simulation studies. |
Date: | 2019–09–15 |
URL: | http://d.repec.org/n?u=RePEc:boc:usug19:07&r=all |
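The talk's recommendation is Stata-specific (simulate versus postfile), but the structural pattern it argues for (a cleanly separated per-repetition function plus a tidy dataset of results) is language-agnostic. A Python rendering of that pattern, with an invented toy estimand, might look like this:

import numpy as np
import pandas as pd

def one_repetition(rep, n=100, true_mean=0.5, seed_base=2019):
    """Simulate one dataset, analyse it, and return one row of results."""
    rng = np.random.default_rng(seed_base + rep)
    sample = rng.normal(true_mean, 1.0, size=n)
    est = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)
    return {"rep": rep, "estimate": est, "std_err": se,
            "covered": abs(est - true_mean) <= 1.96 * se}

# loop over repetitions, collect one row each, end with a usable dataset
results = pd.DataFrame([one_repetition(rep) for rep in range(1_000)])
print(results[["estimate", "covered"]].mean())    # bias check and CI coverage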
By: | Annette Ficker; Frits Spieksma; G. Woeginger |
Abstract: | An instance of a balanced optimization problem with vector costs consists of a ground set X, a cost vector for every element of X, and a system of feasible subsets over X. The goal is to find a feasible subset that minimizes the so-called imbalance of values in every coordinate of the underlying vector costs. Balanced optimization problems with vector costs are equivalent to the robust optimization version of balanced optimization problems under the min-max criterion. We regard these problems as a family of optimization problems; one particular member of this family is the known balanced assignment problem. We investigate the complexity and approximability of robust balanced optimization problems in a fairly general setting. We identify a large family of problems that admit a 2-approximation in polynomial time, and we show that for many problems in this family this approximation factor 2 is best-possible (unless P=NP). We pay special attention to the balanced assignment problem with vector costs and show that this problem is NP-hard even in the highly restricted case of sum costs. We conclude by performing computational experiments for this problem. |
Keywords: | Balanced optimization, Assignment problem, Computational complexity, Approximation |
Date: | 2018–01 |
URL: | http://d.repec.org/n?u=RePEc:ete:kbiper:611921&r=all |
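As a toy illustration of the balanced assignment problem with scalar costs that the abstract mentions, the brute-force Python sketch below enumerates all assignments of three workers to three jobs and picks the one with the smallest imbalance (largest minus smallest cost used). The cost matrix is invented, and enumeration is only feasible for tiny instances; the paper's contribution concerns complexity and approximation, not this search.

from itertools import permutations

costs = [[4, 2, 8],        # costs[i][j] = cost of assigning worker i to job j
         [4, 3, 7],
         [3, 1, 6]]

def imbalance(assignment):
    used = [costs[i][j] for i, j in enumerate(assignment)]
    return max(used) - min(used)           # gap between largest and smallest cost used

best = min(permutations(range(len(costs))), key=imbalance)
print(best, imbalance(best))               # best assignment and its imbalance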
By: | Sruthi Davuluri; René García Francheschini; Christopher R. Knittel; Chikara Onda; Kelly Roache |
Abstract: | The solar industry in the US typically uses a credit score such as the FICO score as an indicator of consumer utility payment performance and creditworthiness when approving customers for new solar installations. Using data on over 800,000 utility payment performance records and over 5,000 demographic variables, we compare machine learning and econometric models that predict the probability of default against credit-score cutoffs. We compare these models across a variety of measures, including how they affect consumers of different socio-economic backgrounds and how they affect profitability. We find that a traditional regression analysis using a small number of variables specific to utility repayment performance greatly increases accuracy and inclusivity of low-to-moderate income (LMI) consumers relative to the FICO score, and that using machine learning techniques further enhances model performance. Relative to FICO, the machine learning model increases the number of low-to-moderate income consumers approved for community solar by 1.1% to 4.2%, depending on the stringency used for evaluating potential customers, while decreasing the default rate by 1.4 to 1.9 percentage points. Using electricity utility repayment as a proxy for solar installation repayment, shifting from a FICO score cutoff to the machine learning model increases profits by 34% to 1882%, depending on the stringency used for evaluating potential customers. (An illustrative code sketch follows this entry.) |
JEL: | C53 L11 L94 Q2 |
Date: | 2019–09 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:26178&r=all |
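The comparison logic can be sketched as follows: approve applicants either by a score cutoff or by a model's predicted default probability, then compare approval and default rates among those approved. The synthetic score, repayment feature, default process, and thresholds below are illustrative assumptions, not the paper's utility-payment data or model.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000
score = rng.normal(680, 60, n)                       # stand-in for a FICO-like score
late_payments = rng.poisson(1.5, n)                  # stand-in utility repayment feature
logit = -4 + 0.8 * late_payments - 0.01 * (score - 680)
default = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic default outcomes

X = np.column_stack([score, late_payments])
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(X, default, score,
                                                      test_size=0.5, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

cutoff_approved = s_te >= 650                        # classic score-cutoff rule
model_approved = model.predict_proba(X_te)[:, 1] <= 0.10   # model-based rule
for name, approved in [("score cutoff", cutoff_approved), ("model", model_approved)]:
    print(f"{name}: approved {approved.mean():.1%}, "
          f"default rate among approved {y_te[approved].mean():.1%}")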