on Computational Economics
Issue of 2017‒12‒18
ten papers chosen by
By: | Kleijnen, J.P.C. (Tilburg University, Center For Economic Research) |
Abstract: | In this chapter we present Kriging, also known as a Gaussian process (GP) model, which is a mathematical interpolation method. To select the input combinations to be simulated, we use Latin hypercube sampling (LHS); we allow uniform and non-uniform distributions of the simulation inputs. Besides deterministic simulation, we discuss random simulation, which requires adjusting the design and analysis. We discuss sensitivity analysis of simulation models using "functional analysis of variance" (FANOVA), also known as Sobol sensitivity indices. Finally, we discuss optimization of the simulated system, including "robust" optimization. [An illustrative code sketch follows this entry.]
Keywords: | Gaussian process; Latin hypercube; deterministic simulation; random simulation; sensitivity analysis; optimization |
JEL: | C0 C1 C9 C15 C44 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:tiu:tiucen:0e31d8b9-596f-4bbc-a248-00b22c757ba9&r=cmp |
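A minimal sketch of the design-and-analysis workflow described in the abstract, under stated assumptions: the two-input `simulator` function and its bounds are invented stand-ins, not taken from the chapter. Latin hypercube sampling selects the input combinations, and a Gaussian-process (Kriging) metamodel interpolates the simulated outputs.

```python
# Minimal sketch: LHS design + Kriging (GP) metamodel for a toy simulator.
# The simulator and input bounds are illustrative stand-ins, not from the paper.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):
    # Toy deterministic simulation model with two inputs.
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# Latin hypercube sample of 20 input combinations in [0, 1] x [-1, 1].
sampler = qmc.LatinHypercube(d=2, seed=42)
unit = sampler.random(n=20)
X = qmc.scale(unit, l_bounds=[0.0, -1.0], u_bounds=[1.0, 1.0])
y = simulator(X)

# Kriging metamodel: GP with an RBF (Gaussian) correlation function.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

# Predict (interpolate) at new input combinations, with the Kriging variance.
X_new = qmc.scale(sampler.random(n=5), l_bounds=[0.0, -1.0], u_bounds=[1.0, 1.0])
y_hat, y_std = gp.predict(X_new, return_std=True)
print(np.c_[y_hat, y_std])
```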
By: | Dominique Guegan (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique, Labex ReFi - Université Paris1 - Panthéon-Sorbonne); Bertrand Hassani (CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique, Labex ReFi - Université Paris1 - Panthéon-Sorbonne) |
Abstract: | The arrival of big data strategies is threatening the latest trends in financial regulation, which aim at simplifying models and enhancing the comparability of the approaches chosen by financial institutions. Indeed, the intrinsically dynamic philosophy of big data strategies is almost incompatible with the current legal and regulatory framework, as illustrated in this paper. Besides, as presented in our application to credit scoring, the model selection may also evolve dynamically, forcing both practitioners and regulators to develop libraries of models, strategies for switching from one model to another, and supervisory approaches that allow financial institutions to innovate in a risk-mitigated environment. The purpose of this paper is therefore to analyse the issues raised by the big data environment, and in particular by machine learning models, highlighting the tensions in the current framework between the data flows, the model selection process and the necessity to generate appropriate outcomes. [An illustrative code sketch follows this entry.]
Keywords: | Regulation, AUC, Machine Learning, Big Data, Credit Scoring
Date: | 2017–09 |
URL: | http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-01592168&r=cmp |
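A generic sketch of the kind of model comparison the abstract alludes to, assuming synthetic credit data; the challenger models, features and class balance below are placeholders, not the authors' data or methodology. A scorecard-style logistic regression is compared with a machine-learning challenger using AUC as the selection criterion.

```python
# Illustrative sketch (not the authors' data or models): compare a simple
# scorecard-style logistic regression with a machine-learning challenger on
# synthetic credit data, using AUC as the model-selection criterion.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)  # ~10% defaults
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {
    "logistic scorecard": LogisticRegression(max_iter=1000),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```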
By: | Deinhammer, Harald; Ladi, Anna |
Abstract: | The quality of banknotes in the cash cycles of Eurosystem countries varies, even though all of these countries use identical euro banknotes. While this is known to depend on national characteristics, such as public use and the involvement of the central bank in cash processing operations, the influence of all relevant parameters has not yet been established. This paper presents two computer-based models for the simulation of banknote cash cycles. The first model simulates a cash cycle using a theoretical approach based on key figures and models banknote fitness as a one-dimensional profile of fitness levels. It identifies (i) the frequency with which banknotes are returned to the central bank, (ii) the fitness threshold used in automated note processing at the central bank, and (iii) the note lifetime as the main drivers of banknote quality in circulation as well as of central bank cash cycle costs. Production variations in new banknotes, the fitness threshold applied by commercial cash handlers and the accuracy of the fitness sensors used in the sorting process have a smaller but non-trivial impact. The second model simulates banknotes in circulation as single entities and is oriented towards modelling country-specific cash cycles using available single-note data. It is constructed using data collected by monitoring banknotes in circulation over the duration of a "circulation trial" carried out in three euro area countries. We compare the quality predicted by this second, data-based model against actual cash cycle data collected outside the circulation trial, discuss the reasons for the deviations found and conclude with considerations for an optimal theoretical national cash cycle. [An illustrative code sketch follows this entry.]
Keywords: | banknote circulation, banknote lifetime, banknote quality, banknotes, circulation modelling
JEL: | C46 C63 E42 E58
Date: | 2017–12 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbops:2017204&r=cmp |
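A toy single-note simulation in the spirit of the second model described above. All parameter values (monthly return probability, soiling rate, sorting threshold) are assumptions for illustration only, not figures from the paper.

```python
# Toy single-note cash-cycle simulation (illustrative only; the return
# probability, soiling rate and sorting threshold are assumed values).
import numpy as np

rng = np.random.default_rng(0)
N_NOTES, N_MONTHS = 10_000, 120
RETURN_PROB = 0.15        # monthly chance a note is returned to the central bank
SOILING_PER_MONTH = 0.02  # average fitness lost per month in circulation
SORT_THRESHOLD = 0.3      # notes below this fitness are destroyed and replaced

fitness = np.ones(N_NOTES)            # 1.0 = brand new, 0.0 = completely unfit
replaced = 0
for month in range(N_MONTHS):
    fitness -= rng.exponential(SOILING_PER_MONTH, N_NOTES)  # gradual soiling
    returned = rng.random(N_NOTES) < RETURN_PROB              # back at the NCB
    unfit = returned & (fitness < SORT_THRESHOLD)             # sorted out
    replaced += unfit.sum()
    fitness[unfit] = 1.0                                      # issue new notes

share_unfit = (fitness < SORT_THRESHOLD).mean()
print(f"share of unfit notes in circulation: {share_unfit:.1%}")
print(f"average replacements per note per year: "
      f"{replaced / (N_NOTES * N_MONTHS / 12):.2f}")
```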
By: | Tomasello, Mario Vincenzo; Burkholz, Rebekka; Schweitzer, Frank |
Abstract: | The authors develop an agent-based model to reproduce the size distribution of firms' R&D alliances. Agents are uniformly selected to initiate an alliance and to invite collaboration partners, who decide about acceptance based on an individual threshold that is compared with the utility expected from joining the current alliance. The benefit of an alliance results from the fitness of the agents involved; fitness is obtained from an empirical distribution of agents' activities. The cost of an alliance reflects its coordination effort. Two free parameters, a_c and a_1, scale the costs and the individual threshold. If an initiator receives R rejections of its invitations, the alliance formation stops and another initiator is selected. The three free parameters (a_c, a_1, R) are calibrated against a large-scale data set of about 15,000 firms engaging in about 15,000 R&D alliances over 26 years. To validate the model, the authors compare the empirical size distribution with the theoretical one using confidence bands and find very good agreement. As an asset of the agent-based model, they provide an analytical solution that considerably reduces the simulation effort. The analytical solution applies to general forms of the utility of alliances, so the model can be extended to other cases of alliance formation. While no information about the initiators of an alliance is available, the results indicate that it is mostly firms with high fitness that are able to attract newcomers and to establish larger alliances. [An illustrative code sketch follows this entry.]
Keywords: | R&D network, alliance, collaboration, agent
JEL: | L14 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:zbw:ifwedp:2017107&r=cmp |
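A sketch of the alliance-formation mechanism summarized above. The functional forms for benefit, cost and individual thresholds, and the parameter values, are placeholder assumptions for illustration; the paper's exact specification and calibration differ.

```python
# Sketch of the alliance-formation mechanism described in the abstract.
# Benefit, cost and threshold functional forms are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(1)
N_AGENTS, N_ALLIANCES = 1000, 2000
A_C, A_1, R = 0.5, 1.0, 3          # cost scale, threshold scale, max rejections

fitness = rng.pareto(2.0, N_AGENTS)            # stand-in empirical fitness
threshold = A_1 * rng.random(N_AGENTS)         # individual acceptance thresholds

def utility(members):
    """Expected utility of joining: benefit from members' fitness minus cost."""
    benefit = fitness[members].sum()
    cost = A_C * len(members) ** 2             # coordination effort grows with size
    return benefit - cost

sizes = []
for _ in range(N_ALLIANCES):
    initiator = rng.integers(N_AGENTS)         # uniformly selected initiator
    members, rejections = [initiator], 0
    while rejections < R:
        invitee = rng.integers(N_AGENTS)
        if invitee in members:
            continue
        if utility(members + [invitee]) >= threshold[invitee]:
            members.append(invitee)            # invitee accepts and joins
        else:
            rejections += 1                    # formation stops after R rejections
    sizes.append(len(members))

sizes = np.array(sizes)
print("mean alliance size:", sizes.mean())
print("share of alliances with 5 or more members:", (sizes >= 5).mean())
```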
By: | Giovanni Dosi; Andrea Roventini; Emanuele Russo |
Abstract: | In this paper we present a multi-country, multi-industry agent-based model investigating the different growth patterns of interdependent economies. Each country features a Schumpeterian engine of endogenous technical change which interacts with Keynesian/Kaldorian demand-generation mechanisms. National growth trajectories are driven by firms' accumulation of technological knowledge, which in turn also leads to emergent specialization patterns in different industries. Interactions among economies occur via trade flows, stemming from the competition of firms in international markets. Simulation results show the emergence of persistent income divergence among countries, leading to polarization and club formation. Moreover, each country experiences a structural transformation of its productive structure during the development process. These dynamics result from firm-level virtuous (or vicious) cycles between knowledge accumulation, trade performance, and growth. The model accounts for a rich ensemble of empirical regularities at the macro, meso and micro levels of aggregation. [An illustrative code sketch follows this entry.]
Keywords: | Endogenous growth, structural change, technology gaps, global divergence, absolute advantages, agent-based models
Date: | 2017–12–11 |
URL: | http://d.repec.org/n?u=RePEc:ssa:lemwps:2017/32&r=cmp |
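A highly stylized sketch of the two interacting mechanisms named in the abstract: a Schumpeterian engine (stochastic, knowledge-driven productivity growth) and trade feedback through replicator-style market-share dynamics. Every functional form and parameter below is an illustrative assumption; this is not the paper's model.

```python
# Highly stylized sketch: knowledge-driven innovation plus replicator-style
# market-share dynamics. All functional forms and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(2)
N_COUNTRIES, T = 10, 200
CHI = 0.5                                        # sensitivity of shares to competitiveness

productivity = np.ones(N_COUNTRIES)
knowledge = rng.uniform(0.5, 1.5, N_COUNTRIES)   # initial knowledge stocks
share = np.full(N_COUNTRIES, 1.0 / N_COUNTRIES)  # world market shares

for t in range(T):
    # Schumpeterian engine: innovation probability increases with knowledge.
    p_innovate = 1 - np.exp(-0.1 * knowledge)
    jump = rng.random(N_COUNTRIES) < p_innovate
    productivity *= np.where(jump, 1 + 0.1 * rng.beta(2, 5, N_COUNTRIES), 1.0)
    # Cumulative learning: successful innovation adds to the knowledge stock.
    knowledge += 0.05 * jump * productivity
    # Trade feedback: shares move towards relatively competitive countries.
    competitiveness = productivity / (productivity * share).sum()
    share *= 1 + CHI * (competitiveness - 1)
    share /= share.sum()

print("productivity dispersion (max/min):", productivity.max() / productivity.min())
print("market-share concentration (Herfindahl):", (share ** 2).sum())
```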
By: | Dario Sansone (Department of Economics, Georgetown University) |
Abstract: | This paper provides an algorithm to predict which students are going to drop out of high school, relying only on information available in 9th grade. It verifies that using a parsimonious early warning system - as implemented in many schools - leads to poor results. It shows that schools can obtain more precise predictions by exploiting the available high-dimensional data jointly with machine learning tools such as Support Vector Machines, Boosted Regression and Post-LASSO. It carefully selects goodness-of-fit criteria based on the context and the underlying theoretical framework: model parameters are calibrated by taking into account policy goals and budget constraints. Finally, it uses unsupervised machine learning to divide students at risk of dropping out into different clusters. [An illustrative code sketch follows this entry.]
Keywords: | High School Dropout, Machine Learning, Big Data |
JEL: | C53 I20 |
URL: | http://d.repec.org/n?u=RePEc:geo:guwopa:gueconwpa~17-17-09&r=cmp |
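A sketch of the workflow described above on synthetic data (the paper's data, variable set and tuning differ): predict dropout risk with a boosted classifier, flag the highest-risk students a school can afford to target under a budget constraint, then cluster the flagged students for differentiated support.

```python
# Illustrative sketch on synthetic data (not the paper's data or tuning):
# boosted dropout-risk predictions, a budget-constrained flagging rule,
# and a clustering step for the students flagged as at risk.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=100, n_informative=15,
                           weights=[0.85, 0.15], random_state=0)  # ~15% dropouts
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]

BUDGET = 200                                  # students the school can support
flagged = np.argsort(risk)[::-1][:BUDGET]     # highest predicted risk first
recall = y_te[flagged].sum() / y_te.sum()
print(f"share of actual dropouts reached within budget: {recall:.1%}")

# Unsupervised step: group at-risk students into clusters for targeted support.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_te[flagged])
print("cluster sizes:", np.bincount(clusters))
```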
By: | ITO Kazuyori |
Abstract: | There are currently two types of artificial intelligence (AI): big-data-driven AI (BD-AI), which is at the height of its influence, and neuromorphic AI (NM-AI), which is expected to flourish but still lacks practicality. The first objective of this paper is to consider, from the perspectives of computer science, semiconductor integrated circuits, and neuroscience as well as economics, why we particularly need to pay attention now to NM-AI, which is expected to become the core of AI in the medium and long term. Based on this consideration, we try to clarify what kind of intelligence is embodied by NM-AI and BD-AI, and discuss the complementarity or substitutability between human capital (HC)/human intelligence (HI) and a future completed version of NM-AI that goes beyond current BD-AI. More concretely, we try to reclassify and subdivide, in a non-behavioristic way, the suitcase word "intelligence", which is full of ambiguities. Furthermore, based on this discussion, we try to understand what kinds of (non-)inclusion relations exist between both types of intelligence by referring to their "response capabilities to change-causing or unusual situations" and their self-evolvability. In doing so, we take up three viewpoints in particular: HC/HI as a social network, the role of emotion as a fast perspective-switching device for coping with change-causing or unusual situations, and the role of emotion as a community-forming device that creates a wide range of cooperation among people with common knowledge and cultures as well as diverse mutual intentions.
Date: | 2017–11 |
URL: | http://d.repec.org/n?u=RePEc:eti:rpdpjp:17031&r=cmp |
By: | Giovanni Dosi; Mauro Napoletano; Andrea Roventini; Joseph E. Stiglitz; Tania Treibich |
Abstract: | We analyze the individual and macroeconomic impacts of heterogeneous expectations and action rules within an agent-based model populated by heterogeneous, interacting firms. Agents have to cope with a complex, evolving economy characterized by deep uncertainty resulting from technical change, imperfect information and coordination hurdles. In these circumstances, we find that neither individual nor macroeconomic dynamics improve when agents replace myopic expectations with less naïve learning rules. In fact, more sophisticated expectations, e.g. recursive least squares (RLS), produce less accurate individual forecasts and also considerably worsen the performance of the economy. Finally, we experiment with agents that simply adjust to technological shocks, and we show that individual and aggregate performance dramatically degrade. Our results suggest that fast and frugal robust heuristics are not a second-best option: rather, they are "rational" in macroeconomic environments with heterogeneous, interacting agents and changing "fundamentals". [An illustrative code sketch follows this entry.]
Keywords: | complexity, expectations, heterogeneity, heuristics, learning, agent-based model, computational economics |
Date: | 2017–12–07 |
URL: | http://d.repec.org/n?u=RePEc:ssa:lemwps:2017/31&r=cmp |
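A standalone illustration, not the paper's model: it simply contrasts the two kinds of rules the abstract mentions, a myopic (naive) expectation versus a recursive least squares (RLS) learning rule, by comparing their one-step-ahead forecast errors on an invented series whose "fundamentals" change occasionally.

```python
# Standalone comparison of a myopic forecast rule and a recursive least
# squares (RLS) learning rule on a series with occasional structural change.
# The data-generating process and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
T = 400
y = np.zeros(T)
mean = 1.0
for t in range(1, T):
    if t % 100 == 0:
        mean = rng.uniform(0.5, 2.0)           # occasional structural change
    y[t] = mean + 0.6 * (y[t - 1] - mean) + rng.normal(0, 0.2)

# Myopic rule: expect tomorrow to equal today.
naive_err = y[2:] - y[1:-1]

# RLS rule: estimate y_t = a + b * y_{t-1} recursively, forecast one step ahead.
theta = np.zeros(2)                            # [a, b]
P = np.eye(2) * 100.0                          # large initial uncertainty
lam = 0.99                                     # forgetting factor
rls_err = []
for t in range(1, T - 1):
    x = np.array([1.0, y[t]])
    rls_err.append(y[t + 1] - x @ theta)       # forecast error before updating
    K = P @ x / (lam + x @ P @ x)
    theta += K * (y[t + 1] - x @ theta)
    P = (P - np.outer(K, x @ P)) / lam

print("naive RMSE:", np.sqrt(np.mean(naive_err ** 2)))
print("RLS   RMSE:", np.sqrt(np.mean(np.array(rls_err) ** 2)))
```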
By: | Fusai, Gianluca; Germano, Guido; Marazzina, Daniele |
Abstract: | The Wiener-Hopf factorization of a complex function arises in a variety of fields in applied mathematics, such as probability, finance, insurance, queuing theory, radio engineering and fluid mechanics. The factorization fully characterizes the distribution of functionals of a random walk or a Lévy process, such as the maximum, the minimum and hitting times. Here we propose a constructive procedure for the computation of the Wiener-Hopf factors, valid for both single and double barriers, based on the combined use of the Hilbert transform and the z-transform. The numerical implementation can be performed simply via the fast Fourier transform and Euler summation. Given that the information in the Wiener-Hopf factors is strictly related to the distributions of first passage times, as a concrete application in mathematical finance we consider the pricing of discretely monitored exotic options, such as lookback and barrier options, when the underlying price evolves according to an exponential Lévy process. We show that the computational cost of our procedure is independent of the number of monitoring dates and that the error decays exponentially with the number of grid points. [An illustrative code sketch follows this entry.]
Keywords: | Path-dependent options; Hilbert transform; Lévy process; Spitzer identity; Wiener-Hopf factorization |
JEL: | J1 |
Date: | 2016–05–16 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:67564&r=cmp |
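A building-block illustration only: the elementary FFT-based Hilbert transform, checked on the known pair H[cos] = sin. The full procedure in the paper (Wiener-Hopf factors via Spitzer identities, z-transform inversion, Euler summation) involves considerably more machinery than this sketch.

```python
# Discrete Hilbert transform via the FFT, verified on H[cos(2*pi*x)] = sin(2*pi*x).
# This is only the elementary building block, not the paper's pricing procedure.
import numpy as np

def hilbert_transform(f, dx):
    """Hilbert transform on a uniform periodic grid via the multiplier -i*sign(xi)."""
    n = len(f)
    xi = np.fft.fftfreq(n, d=dx)
    return np.real(np.fft.ifft(-1j * np.sign(xi) * np.fft.fft(f)))

n = 1024
x = np.linspace(0.0, 1.0, n, endpoint=False)
f = np.cos(2 * np.pi * x)
Hf = hilbert_transform(f, dx=x[1] - x[0])

print("max error vs sin(2*pi*x):", np.max(np.abs(Hf - np.sin(2 * np.pi * x))))
```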
By: | Thorsten Simon; Peter Fabsic; Georg J. Mayr; Nikolaus Umlauf; Achim Zeileis |
Abstract: | A probabilistic forecasting method to predict thunderstorms in the European Eastern Alps is developed. A statistical model links lightning occurrence from the ground-based ALDIS detection network to a large set of direct and derived variables from a numerical weather prediction (NWP) system. The NWP system is the high-resolution run (HRES) of the European Centre for Medium-Range Weather Forecasts (ECMWF). The statistical model is built within a generalized additive model (GAM) framework and estimated by Markov chain Monte Carlo (MCMC) simulation. Gradient boosting with stability selection serves as a tool for selecting a stable set of potentially nonlinear terms. Three grids from 64×64 km² to 16×16 km² and five forecast horizons from five days to one day ahead are investigated to predict thunderstorms during afternoons (1200 UTC to 1800 UTC). Frequently selected covariates for the nonlinear terms are variants of convective precipitation, convective available potential energy, relative humidity and temperature in the mid-layers of the troposphere, among others. All models, even those for a lead time of five days, outperform a forecast based on climatology in an out-of-sample comparison. An example case illustrates that coarse spatial patterns are already successfully forecast five days ahead. [An illustrative code sketch follows this entry.]
Keywords: | lightning detection data, statistical post-processing, generalized additive models, gradient boosting, stability selection, MCMC |
JEL: | C11 C53 Q54 |
Date: | 2017–12 |
URL: | http://d.repec.org/n?u=RePEc:inn:wpaper:2017-25&r=cmp |
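A stand-in sketch only: the paper fits a GAM with gradient boosting, stability selection and MCMC, whereas the code below uses a generic boosted classifier on invented NWP-like covariates (the covariate names and coefficients are assumptions) simply to illustrate the out-of-sample comparison against a climatological baseline via the Brier score.

```python
# Stand-in illustration on synthetic data: boosted lightning-occurrence
# classifier vs. a climatological baseline, compared by Brier score.
# Covariate names, coefficients and the base rate are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 8000
cape = rng.gamma(2.0, 300.0, n)          # convective available potential energy
conv_precip = rng.gamma(1.5, 2.0, n)     # convective precipitation
rh_mid = rng.uniform(20, 100, n)         # mid-tropospheric relative humidity
logit = -6 + 0.004 * cape + 0.3 * conv_precip + 0.03 * rh_mid
lightning = rng.random(n) < 1 / (1 + np.exp(-logit))   # afternoon lightning yes/no

X = np.column_stack([cape, conv_precip, rh_mid])
X_tr, X_te, y_tr, y_te = train_test_split(X, lightning, test_size=0.3, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
p_model = model.predict_proba(X_te)[:, 1]
p_clim = np.full(len(y_te), y_tr.mean())  # climatology: constant base rate

print("Brier score, model:      ", brier_score_loss(y_te, p_model))
print("Brier score, climatology:", brier_score_loss(y_te, p_clim))
```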