NEP: New Economics Papers on Computational Economics
Issue of 2011‒10‒09
nine papers chosen by
By: Thomas Grubinger; Achim Zeileis; Karl-Peter Pfeiffer
Abstract: Commonly used classification and regression tree methods like the CART algorithm are recursive partitioning methods that build the model in a forward stepwise search. Although this approach is known to be an efficient heuristic, the results of recursive tree methods are only locally optimal, as splits are chosen to maximize homogeneity at the next step only. An alternative way to search over the parameter space of trees is to use global optimization methods like evolutionary algorithms. This paper describes the "evtree" package, which implements an evolutionary algorithm for learning globally optimal classification and regression trees in R. Computationally intensive tasks are fully computed in C++, while the "partykit" package (Hothorn and Zeileis 2011) is leveraged to represent the resulting trees in R, providing unified infrastructure for summaries, visualizations, and predictions. "evtree" is compared to "rpart" (Therneau and Atkinson 1997), the open-source CART implementation, and to conditional inference trees ("ctree"; Hothorn, Hornik, and Zeileis 2006). The usefulness of "evtree" is illustrated in a textbook customer classification task and in a benchmark study of predictive accuracy, in which "evtree" achieved results at least similar to, and most of the time better than, those of the recursive algorithms "rpart" and "ctree". (A toy sketch of the evolutionary search idea follows this entry.)
Keywords: machine learning, classification trees, regression trees, evolutionary algorithms, R
JEL: C14 C45 C87
Date: 2011–09
URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2011-20&r=cmp
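The greedy-versus-global distinction above lends itself to a tiny illustration. The following Python sketch (not the evtree R API, whose evolutionary operators act on full trees; every name and parameter here is hypothetical) evolves a population of single-split classification stumps by mutation and survivor selection:

    import random

    def stump_error(split, data):
        # misclassification count of a (feature, threshold) stump
        feat, thr = split
        left = [y for x, y in data if x[feat] <= thr]
        right = [y for x, y in data if x[feat] > thr]
        def errors(labels):
            if not labels:
                return 0
            majority = max(set(labels), key=labels.count)
            return sum(1 for y in labels if y != majority)
        return errors(left) + errors(right)

    def evolve(data, n_features, pop_size=20, generations=50):
        # random initial population of candidate splits
        pop = [(random.randrange(n_features), random.random())
               for _ in range(pop_size)]
        for _ in range(generations):
            # mutate: occasionally resample the feature, always perturb the threshold
            children = [(f if random.random() < 0.8 else random.randrange(n_features),
                         thr + random.gauss(0.0, 0.1))
                        for f, thr in pop]
            # elitist (mu + lambda) selection on training error
            pop = sorted(pop + children, key=lambda s: stump_error(s, data))[:pop_size]
        return pop[0]

    random.seed(1)
    data = []
    for _ in range(200):
        x = (random.random(), random.random())
        data.append((x, int(x[0] > 0.6)))   # class depends on feature 0 only
    print(evolve(data, n_features=2))       # should recover feature 0, threshold near 0.6

The same loop run over whole trees rather than stumps, with tree-level variation operators, is broadly the kind of global search the abstract describes evtree performing (its intensive parts in C++).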
By: Eric BROUILLAT (GREThA, CNRS, UMR 5113)
Abstract: This paper presents an agent-based simulation model that explores the dynamics of product lifetimes on a competitive market. The main objective of this modelling exercise is to investigate the conditions under which product-life extension strategies can be effective. In the model, change in product characteristics is driven by an endogenous stochastic process relying on the interplay between heterogeneous consumers and firms. The main contribution of the paper is a detailed modelling of demand, which enables a more thorough analysis of how the decisions of boundedly rational consumers affect the dynamics of the system and, more particularly, of how the purchase process shapes market selection and firms' strategies. While most of the existing literature on product lifetime investigates durable-goods monopolists, our study highlights that competition and diversity matter. The coexistence of competing products with different lifetimes can encourage firms to market long-lifetime products. Our results also stress the critical role played in market dynamics by the processes driving the purchase decision: the purchasing behavior of consumers will itself greatly guide firms' strategies and ultimately shape market structure. (A toy agent-based sketch follows this entry.)
Keywords: industrial dynamics; obsolescence; product durability; product lifetimes; simulation model; sustainable consumption
JEL: O33 D11 D21 Q57
Date: 2011
URL: http://d.repec.org/n?u=RePEc:grt:wpegrt:2011-31&r=cmp
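To give a flavour of the mechanism described above, here is a deliberately crude agent-based sketch in Python (all behavioural rules and numbers are invented for illustration and are not the paper's model): heterogeneous consumers replace a worn-out product via a myopic utility comparison between a cheap short-lived variant and a dearer long-lived one, so per-period sales reflect both preference shares and replacement frequencies.

    import random

    PRODUCTS = {"short": {"price": 1.0, "life": 3},   # hypothetical attributes
                "long":  {"price": 2.0, "life": 8}}

    class Consumer:
        def __init__(self):
            self.durability_weight = random.random()  # heterogeneous preferences
            self.owned, self.age = None, 0

        def utility(self, name):
            p = PRODUCTS[name]
            return (self.durability_weight * p["life"]
                    - (1.0 - self.durability_weight) * p["price"])

        def step(self, sales):
            # replace the product when it wears out (boundedly rational: myopic argmax)
            if self.owned is None or self.age >= PRODUCTS[self.owned]["life"]:
                self.owned = max(PRODUCTS, key=self.utility)
                self.age = 0
                sales[self.owned] += 1
            self.age += 1

    random.seed(0)
    consumers = [Consumer() for _ in range(1000)]
    for t in range(50):
        sales = {"short": 0, "long": 0}
        for c in consumers:
            c.step(sales)
    print(sales)   # final-period sales split between short- and long-lived products

Endogenizing the product attributes as firm strategies, as the paper does, would close the loop between purchase decisions and market structure.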
By: Bédia F. Aka; Souleymane S. Diallo
Abstract: The objective of this paper is to examine how a small open economy such as Côte d’Ivoire (CI) can obtain growth-based internal tax resources, and how the tax system affects households and individuals through relative prices. A microsimulated CGE model, built on household survey data, is used to analyse the effects of an alternative tax system on households. It is postulated that the military and political crisis that started in 1999 with the first coup d'état in Côte d’Ivoire is transitory and that CI retains the capacity to conduct internal tax policy. The paper indicates that an alternative tax structure can reduce distortions, lowering regional poverty and household inequality in both cities and small areas of the country. The model is formulated using Côte d’Ivoire’s 1998-based social accounting matrix and the 1998 population survey of 4,200 households. The main findings of this study are that the post-crisis tax policies envisioned by the government (reducing the tax rate on firms, reducing import taxes and increasing taxes on household income) result in an increase in poverty and inequality at the regional, city and small-area levels. (A first-order incidence formula illustrating the relative-price channel follows this entry.)
Date: 2011–01
URL: http://d.repec.org/n?u=RePEc:aer:rpaper:rp_218&r=cmp
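The relative-price channel invoked above can be made concrete with a standard first-order incidence approximation (a generic textbook formula, not reproduced from the paper). For household h with budget shares s_i^h over goods i, the real-income effect of tax-induced price and income changes is approximately

    d\ln u_h \;\approx\; d\ln y_h \;-\; \sum_i s_i^h \, d\ln p_i ,

so a reform that raises the relative prices of goods weighted heavily in poor households' budgets can increase measured poverty even while it lowers taxes elsewhere.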
By: Erhan Bayraktar; Arash Fahim
Abstract: We present a stochastic numerical method for solving fully non-linear free boundary problems of parabolic type and provide a rate of convergence under reasonable conditions on the non-linearity.
Date: 2011–09
URL: http://d.repec.org/n?u=RePEc:arx:papers:1109.5752&r=cmp
By: Lance Taylor (New School for Social Research, New York, NY)
Abstract: This paper begins with an informal history of developing-country CGE models, goes on to specification and closure, and finally describes a few models with financial extensions. Sectoral detail is central to CGE analysis, but after an initial sketch of an n-sector system most of the discussion focuses on the models' "closures", or patterns of macroeconomic causality, because they strongly influence sectoral results. Particular attention is paid to the ways in which international trade and financial flows are fitted into applied models. (A one-equation illustration of what a closure pins down follows this entry.)
Keywords: development, developing countries, CGE models, international trade, financial flows, sector
Date: 2011–01
URL: http://d.repec.org/n?u=RePEc:epa:cepawp:2011-1&r=cmp
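For readers new to the term, the one-equation illustration promised above: in a one-sector economy the saving-investment balance (a textbook identity, not taken from the paper) is

    I \;=\; S_p \;+\; (T - G) \;+\; (M - E),

i.e., investment is financed by private, government, and foreign saving. A "closure" is the choice of which variable adjusts to hold the identity: a neoclassical closure lets I adjust to available saving, an investment-driven closure lets S_p adjust through changes in income or distribution, and external closures put the burden on the trade balance. Because each choice embodies a different causal pattern, the same accounting framework yields different sectoral results under different closures.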
By: Irene Petersen (University College London); Catherine Welch; Jonathan Bartlett; Ian White; Richard Morris; Louise Marston; Kate Walters; Irwin Nazareth; James Carpenter
Abstract: Multiple imputation is increasingly regarded as the standard method to account for partially observed data, but most methods have been based on cross-sectional imputation algorithms. Recently, a new multiple-imputation method, the two-fold fully conditional specification (FCS) method, was developed to impute missing data in longitudinal datasets with nonmonotone missing data (see Nevalainen J., Kenward M.G., and Virtanen S.M. 2009. Missing values in longitudinal dietary data: A multiple imputation approach based on a fully conditional specification. Statistics in Medicine 28: 3657-3669). This method imputes missing data at a given time point based on measurements recorded at the previous and next time points. Up to now, the method has only been tested on a relatively small dataset and under very specific conditions. We have implemented the two-fold FCS algorithm in Stata, and in this study we further challenge and evaluate the performance of the algorithm under different scenarios. In simulation studies, we generated 1,000 datasets similar in structure to the longitudinal clinical records (The Health Improvement Network primary care database) to which we will apply the two-fold FCS algorithm. Initially, these generated datasets included complete records. We then introduced different levels and patterns of partially observed data and applied the algorithm to generate multiply imputed datasets. The results of our initial multiple imputations demonstrated that the algorithm provided acceptable results when a linear substantive model was used and data were imputed over a limited time period for continuous variables such as weight and blood pressure. Introducing an exponential substantive model introduced some bias, but estimates were still within acceptable ranges. We will present results for simulation studies that include situations where categorical and continuous variables change over a 10-year period (for example, smokers become ex-smokers, weight increases or decreases) and where large proportions of data are unobserved. We also explore how the algorithm deals with interactions, and whether the direction in which the algorithm is run (forward or backward in time) has any impact on the final data distribution. (A schematic sketch of the imputation loop follows this entry.)
Date: 2011–09–26
URL: http://d.repec.org/n?u=RePEc:boc:usug11:11&r=cmp
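The time-window idea ("impute at time t from t-1 and t+1") can be sketched schematically in Python. Everything here is an assumption standing in for the authors' Stata implementation: the data layout, the variable names, and the plain regression-plus-noise draws; a genuine multiple imputation would also draw the regression coefficients and produce several completed datasets.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    def twofold_fcs(df, variables, times, n_cycles=5):
        # long format in (one row per id and time); wide (variable, time) grid out
        wide = df.pivot(index="id", columns="time", values=variables)
        missing = wide.isna()
        filled = wide.fillna(wide.mean())          # crude starting values
        rng = np.random.default_rng(0)
        for _ in range(n_cycles):
            for t in times:
                for v in variables:
                    target = (v, t)
                    mask = missing[target]
                    if not mask.any():
                        continue
                    # predictors: the other variables at t, plus v at t-1 and t+1
                    preds = [(u, t) for u in variables if u != v]
                    preds += [(v, s) for s in (t - 1, t + 1) if s in times]
                    X, y = filled[preds], filled[target]
                    fit = LinearRegression().fit(X[~mask], y[~mask])
                    resid_sd = np.std(y[~mask] - fit.predict(X[~mask]))
                    filled.loc[mask, target] = (fit.predict(X[mask])
                                                + rng.normal(0, resid_sd, mask.sum()))
        return filled

Cycling t forward versus backward through the outer loop is exactly the directionality question the abstract raises.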
By: Marco Cozzi (Queen's University)
Abstract: This paper discusses a series of Monte Carlo experiments designed to evaluate the empirical properties of heterogeneous-agent macroeconomic models in the presence of sampling variability, under which the calibration procedure leads to the welfare analysis being conducted with the wrong parameters. The ability of the calibrated model to correctly predict the welfare changes induced by a set of policy experiments is assessed. The results show that, for the economy and the policy reforms under analysis, the model always predicts the right sign of the welfare effects. Quantitatively, the maximum errors made in evaluating a policy change are very small for some reforms (on the order of 0.05 percentage points) but bigger for others (on the order of 0.5 percentage points). Finally, having access to better data, in terms of larger samples, does lead to sizable increases in the precision of the estimated welfare effects. (A toy sketch of the experimental design follows this entry.)
Keywords: Monte Carlo, Heterogeneous Agents, Incomplete Markets, Ex-ante Policy Evaluation, Welfare
JEL: C15 C68 D52
Date: 2011–09
URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1277&r=cmp
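The experimental design is easy to caricature in a few lines of Python (the welfare function and calibration target below are invented; the paper's model is a full heterogeneous-agent economy): draw a sample, "calibrate" to a sample moment, and compare the policy's welfare effect at the calibrated and true parameters.

    import numpy as np

    def welfare_effect(beta, tax):
        # stylized consumption-equivalent welfare change of a reform (hypothetical form)
        return np.log(1.0 + beta) - tax * beta

    rng = np.random.default_rng(42)
    beta_true, tax_reform, n_obs = 0.5, 0.3, 500
    true_effect = welfare_effect(beta_true, tax_reform)

    errors, signs = [], []
    for _ in range(1000):                             # Monte Carlo replications
        sample = rng.normal(beta_true, 0.2, n_obs)    # data subject to sampling noise
        beta_hat = sample.mean()                      # "calibration": match the mean
        est = welfare_effect(beta_hat, tax_reform)
        errors.append(abs(est - true_effect))
        signs.append(np.sign(est) == np.sign(true_effect))

    print(f"correct sign in {np.mean(signs):.1%} of replications; "
          f"max abs error {max(errors):.4f}")

Larger n_obs shrinks the sampling noise in beta_hat and hence the welfare-effect error, which is the precision result the abstract reports.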
By: Arnold Zellner (posthumous; Booth School of Business, University of Chicago, USA); Tomohiro Ando (Graduate School of Business Administration, Keio University, Japan); Nalan Basturk (Econometric Institute, Erasmus University Rotterdam, The Netherlands; The Rimini Centre for Economic Analysis, Rimini, Italy); Lennart Hoogerheide (VU University Amsterdam, The Netherlands); Herman K. van Dijk (Econometric Institute, Erasmus University Rotterdam, and VU University Amsterdam)
Abstract: A Direct Monte Carlo (DMC) approach is introduced for posterior simulation in the Instrumental Variables (IV) model with one possibly endogenous regressor, multiple instruments and Gaussian errors under a flat prior. This DMC method can also be applied in an IV model (with one or multiple instruments) under an informative prior for the endogenous regressor's effect. However, this DMC approach cannot be applied to more complex IV models or Simultaneous Equations Models with multiple endogenous regressors. An Approximate DMC (ADMC) approach is therefore introduced that makes use of the proposed Hybrid Mixture Sampling (HMS) method, which facilitates Metropolis-Hastings (MH) or Importance Sampling from a proper marginal posterior density with highly non-elliptical shapes that tend to infinity at a point of singularity. After one has simulated from the irregularly shaped marginal distribution using the HMS method, one easily samples the other parameters from their conditional Student-t and Inverse-Wishart posteriors. An example illustrates the close approximation and high MH acceptance rate, whereas using a simple candidate distribution such as the Student-t may lead to an infinite variance of the Importance Sampling weights. The choice between the IV model and a simple linear model under the restriction of exogeneity may be based on predictive likelihoods, for which the efficient simulation of all model parameters may be quite useful. In future work the ADMC approach may be extended to more extensive IV models such as IV with non-Gaussian errors, panel IV, or probit/logit IV. (The model referred to here is written out after this entry.)
Keywords: Instrumental Variables; Errors in Variables; Simultaneous Equations Model; Bayesian estimation; Direct Monte Carlo; Hybrid Mixture Sampling
Date: 2011–09–27
URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20110137&r=cmp
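For reference, the model the abstract refers to, with one possibly endogenous regressor x_i, instrument vector z_i, and jointly Gaussian errors (written here in standard notation, not copied from the paper), is

    y_i = x_i \beta + \varepsilon_i, \qquad
    x_i = z_i' \pi + v_i, \qquad
    (\varepsilon_i, v_i)' \sim N(0, \Sigma),

with endogeneity arising whenever the error covariance in \Sigma is nonzero. Conditioning appropriately yields the Student-t and Inverse-Wishart posteriors mentioned in the abstract, which is what makes direct, non-iterative simulation feasible in the simple case.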
By: Silvia Bortot; Mario Fedrizzi; Silvio Giove
Abstract: Modelling an attack tree is basically a matter of associating a logical "and" and a logical "or", but in most real-world applications related to fraud management the "and/or" logic is not adequate to effectively represent the relationship between a parent node and its children, above all when information about attributes is associated with the nodes and the main problem to solve is how to propagate attribute values up the tree through recursive aggregation operations occurring at the "and/or" nodes. OWA-based aggregations have been introduced to generalize the "and" and "or" operators, starting from the observation that in between the extremes "for all" (and) and "for any" (or), terms (quantifiers) like "several", "most", "few", "some", etc. can be introduced to represent the different weights associated with the nodes in the aggregation. The aggregation process taking place at an OWA node depends on the ordered positions of the child nodes, but it does not take account of the possible interactions between the nodes. In this paper, we propose to overcome this drawback by introducing the Choquet integral, whose distinguishing feature is its ability to take into account the interactions between nodes. At first, the attack tree is evaluated recursively through a bottom-up algorithm whose complexity is linear in the number of nodes and, at each node, exponential in its number of children. Then, the algorithm is extended assuming that the attribute values in the leaves are unimodal LR fuzzy numbers, and the calculation of the Choquet integral is carried out using the alpha-cuts. (A minimal sketch of both aggregation operators follows this entry.)
Keywords: Fraud detection; attack tree; ordered weighted averaging (OWA) operator; Choquet integral; fuzzy numbers.
Date: 2011–08
URL: http://d.repec.org/n?u=RePEc:trt:disawp:2011/9&r=cmp
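The sketch promised above: minimal Python versions of the two aggregation operators the abstract contrasts (generic textbook definitions, not the paper's tree-valuation algorithm). The capacity mu must be monotone with mu(empty set) = 0 and mu(all nodes) = 1; this is assumed here rather than checked.

    def owa(values, weights):
        # ordered weighted average: weights attach to ranks, not to specific nodes
        return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

    def choquet(values, mu):
        # discrete Choquet integral of values (dict: node -> score) w.r.t. capacity mu
        nodes = sorted(values, key=values.get)       # ascending by score
        total, prev = 0.0, 0.0
        for i, n in enumerate(nodes):
            coalition = frozenset(nodes[i:])         # nodes scoring >= the current one
            total += (values[n] - prev) * mu[coalition]
            prev = values[n]
        return total

    # toy capacity on two child nodes with positive interaction (superadditive)
    mu = {frozenset(): 0.0, frozenset({"a"}): 0.3,
          frozenset({"b"}): 0.3, frozenset({"a", "b"}): 1.0}
    print(owa([0.4, 0.9], [0.7, 0.3]))          # 0.7*0.9 + 0.3*0.4 = 0.75
    print(choquet({"a": 0.4, "b": 0.9}, mu))    # 0.4*1.0 + 0.5*0.3 = 0.55

Unlike the OWA value, the Choquet value changes if mu({"a", "b"}) is lowered toward the additive case 0.6, which is precisely the node-interaction effect the paper exploits.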