nep-cmp New Economics Papers
on Computational Economics
Issue of 2017‒07‒23
eleven papers chosen by



  1. Environmental impact assessment for climate change policy with the simulation-based integrated assessment model E3ME-FTT-GENIE By J-F Mercure; H. Pollitt; N. R. Edwards; P. B. Holden; U. Chewpreecha; P. Salas; A. Lam; F. Knobloch; J. Vinuales
  2. Harrodian instability in decentralized economies: an agent-based approach By Emanuele Russo
  3. Modeling Economic Systems as Locally-Constructive Sequential Games By Tesfatsion, Leigh
  4. Sequence Classification of the Limit Order Book using Recurrent Neural Networks By Matthew F Dixon
  5. The R package MitISEM: Efficient and robust simulation procedures for Bayesian inference By Nalan Basturk; Stefano Grassi; Lennart Hoogerheide; Anne Opschoor; Herman K. van Dijk
  6. Social protection investments, human capital, and income growth: Simulating the returns to social cash transfers in Uganda By Dietrich, Stephan; Malerba, Daniele; Barrientos, Armando; Gassmann, Franziska; Mohnen, Pierre; Tirivayi, Nyasha; Kavuma, Susan; Matovu, Fred
  7. "Investment with deep learning" (in Japanese) This paper considers investment methods with deep learning in neural networks. In particular, as one can create various investment strategies by different specifications of a loss function, the current work presents two examples based on return anomalies detected by supervised deep learning (SL) or profit maximization by deep reinforcement learning (RL). It also applies learning of individual asset return dynamics to portfolio strategies. Moreover, it turns out that the investment performance are quite sensitive to exogenously specified items such as frequency of input data(e.g.monthly or daily returns), selection of a learning method, update of learning, number of layers in a network and number of units in intermediate layers. Especially, RL provides relatively fine records in portfolio investment, where a further analysis implies the possibility of better performance by reduction in number of units from the input to intermediate layers. By Takaya Fukui; Akihiko Takahashi
  8. An empirical validation protocol for large-scale agent-based models By Sylvain Barde; Sander van der Hoog
  9. Herding, minority game, market clearing and efficient markets in a simple spin model framework By Kristoufek, Ladislav; Vošvrda, Miloslav S.
  10. Regulatory Learning: how to supervise machine learning models? An application to credit scoring By Dominique Guegan; Bertrand Hassani
  11. Machine learning application in online lending risk prediction By Xiaojiao Yu

  1. By: J-F Mercure; H. Pollitt; N. R. Edwards; P. B. Holden; U. Chewpreecha; P. Salas; A. Lam; F. Knobloch; J. Vinuales
    Abstract: A high degree of consensus exists in the climate sciences over the role that human interference with the atmosphere is playing in changing the climate. Following the Paris Agreement, a similar consensus exists in the policy community over the urgency of policy solutions to the climate problem. The context for climate policy is thus moving from agenda setting, which has now been established, to impact assessment, in which we identify policy pathways to implement the Paris Agreement. Most integrated assessment models currently used to address the economic and technical feasibility of avoiding climate change are based purely on engineering with a normative systems optimisation philosophy, and are thus unsuitable to assess the socio-economic impacts of realistic baskets of climate policies. Here, we introduce a fully descriptive simulation-based integrated assessment model designed specifically to assess policies, formed by the combination of (1) a highly disaggregated macro-econometric simulation of the global economy based on time series regressions (E3ME), (2) a family of bottom-up evolutionary simulations of technology diffusion based on cross-sectional discrete choice models (FTT), and (3) a carbon cycle and atmosphere circulation model of intermediate complexity (GENIE-1). We use this combined model to create a detailed global and sectoral policy map and scenario that achieves the goals of the Paris Agreement with 80% probability of not exceeding 2°C of global warming. We propose a blueprint for a new role for integrated assessment models in this upcoming policy assessment context.
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1707.04870&r=cmp
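    The coupling described above can be pictured as three modules passing aggregates to one another every period. The toy Python loop below is only a structural illustration of that idea; the module classes, parameters and numbers are invented for this sketch and are not part of E3ME, FTT or GENIE.
```python
# Illustrative three-module coupling loop; all classes and magnitudes are invented.
class MacroModule:                      # stands in for the macro-econometric block
    def step(self, policy, tech):
        # sectoral energy demand implied by the carbon tax and technology mix
        return {"energy_demand": 100.0 * (1.0 - 0.01 * policy["carbon_tax"])}

class DiffusionModule:                  # stands in for bottom-up technology diffusion
    def __init__(self):
        self.clean_share = 0.2
    def step(self, policy, macro):
        # logistic-style growth of the clean-technology share, pushed by the tax
        self.clean_share += 0.05 * self.clean_share * (1 - self.clean_share) * policy["carbon_tax"]
        return {"clean_share": self.clean_share}

class ClimateModule:                    # stands in for the carbon-cycle/climate block
    def __init__(self):
        self.cumulative_emissions = 0.0
    def step(self, macro, tech):
        self.cumulative_emissions += macro["energy_demand"] * (1 - tech["clean_share"]) * 0.01
        return {"warming": 1.0 + 0.0005 * self.cumulative_emissions}

def run_scenario(policy, horizon=30):
    macro, diffusion, climate = MacroModule(), DiffusionModule(), ClimateModule()
    tech, path = {"clean_share": diffusion.clean_share}, []
    for _ in range(horizon):
        m = macro.step(policy, tech)        # macro block reacts to policy and technology
        tech = diffusion.step(policy, m)    # technology shares respond to policy and macro
        path.append(climate.step(m, tech))  # climate block integrates the resulting emissions
    return path

print(run_scenario({"carbon_tax": 0.5})[-1])
```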
  2. By: Emanuele Russo
    Abstract: This paper presents a small-scale agent-based extension of the so-called neo-Kaleckian model. The aim is to investigate the emergence of Harrodian instability in decentralized market economies. We introduce a parsimonious microfoundation of investment decisions. Agents have heterogeneous expectations about demand growth and set their investment expenditures idiosyncratically. Interactions occur through demand externalities. We simulate the model under different scenarios. First, when heterogeneity is ruled out, Harrodian instability is shown to emerge, as in the aggregate model. Instead, when heterogeneity is accounted for, stable dynamics with endogenous fluctuations arise. At the same time, in this second scenario, all the Keynesian implications are preserved, including the presence of macroeconomic paradoxes. Sensitivity analysis confirms the general robustness of our results and the logical consistency of the model.
    Keywords: Harrodian Instability, Agent-Based Models, Coordination Failures, Heterogeneous Expectations, Neo-Kaleckian model
    Date: 2017–07–13
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2017/17&r=cmp
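    As a structural illustration of the mechanism in this abstract (heterogeneous demand-growth expectations, idiosyncratic investment, demand externalities, adaptive updating), a toy Python sketch follows; its parameters and behavioural rules are assumptions made for illustration and are not calibrated to reproduce the paper's results.
```python
import random

random.seed(0)
N, T = 100, 200
alpha = 0.5              # adaptive-expectations weight (assumed)
kappa = 2.0              # investment sensitivity to expected demand growth (assumed)
autonomous, propensity = 0.2, 0.6
expectations = [random.gauss(0.02, 0.01) for _ in range(N)]   # heterogeneous priors
demand_prev = 1.0

for t in range(T):
    # each firm sets its investment idiosyncratically from its own expectation
    investment = sum(max(0.0, kappa * g) / N for g in expectations)
    # demand externality: realised aggregate demand depends on everyone's spending
    demand = autonomous + propensity * demand_prev + investment
    realised_growth = demand / demand_prev - 1.0
    # adaptive updating of the heterogeneous growth expectations
    expectations = [g + alpha * (realised_growth - g) for g in expectations]
    demand_prev = demand

print(f"final demand: {demand_prev:.3f}, mean expected growth: {sum(expectations) / N:.4f}")
```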
  3. By: Tesfatsion, Leigh
    Abstract: Real-world economies are open-ended dynamic systems consisting of heterogeneous interacting participants. Human participants are decision-makers who strategically take into account the past actions and potential future actions of other participants. All participants are forced to be locally constructive, meaning their actions at any given time must be based on their local states; and participant actions at any given time affect future local states. Taken together, these properties imply real-world economies are locally-constructive sequential games. This study discusses a modeling approach, agent-based computational economics (ACE), that permits researchers to study economic systems from this point of view. ACE modeling principles and objectives are first concisely presented. The remainder of the study then highlights challenging issues and edgier explorations that ACE researchers are currently pursuing.
    Date: 2017–07–11
    URL: http://d.repec.org/n?u=RePEc:isu:genstf:201707110700001022&r=cmp
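    The notion of a locally constructive agent can be made concrete with a small sketch: each agent's decision rule reads only its own local state, and the joint actions then update those states. The interface below is illustrative only, not ACE reference code.
```python
class Agent:
    def __init__(self, name, cash):
        self.name = name
        self.local_state = {"cash": cash, "last_price": 1.0}

    def act(self, t):
        # locally constructive: the decision uses only the agent's own state at time t
        return {"agent": self.name, "bid": 0.1 * self.local_state["cash"]}

def world_step(agents, t):
    actions = [a.act(t) for a in agents]                    # everyone decides locally
    price = sum(x["bid"] for x in actions) / len(actions)   # outcome of the joint actions
    for a in agents:                                        # actions feed back into local states
        a.local_state["last_price"] = price
        a.local_state["cash"] *= 0.99
    return price

agents = [Agent(f"a{i}", cash=100 + 10 * i) for i in range(5)]
for t in range(3):
    print(f"t={t}, price={world_step(agents, t):.2f}")
```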
  4. By: Matthew F Dixon
    Abstract: Recurrent neural networks (RNNs) are types of artificial neural networks (ANNs) that are well suited to forecasting and sequence classification. They have been applied extensively to forecasting univariate financial time series; however, their application to high-frequency trading has not previously been considered. This paper solves a sequence classification problem in which a short sequence of observations of limit order book depths and market orders is used to predict a price-flip at the next event. The capability to adjust quotes according to this prediction reduces the likelihood of adverse price selection. Our results demonstrate the ability of the RNN to capture the non-linear relationship between near-term price-flips and a spatio-temporal representation of the limit order book. The RNN compares favorably with other classifiers, including a linear Kalman filter, using S&P500 E-mini futures level II data over the month of August 2016. Further results assess the effect of retraining the RNN daily and the sensitivity of the performance to trade latency.
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1707.05642&r=cmp
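    A minimal sketch of the kind of sequence-classification setup described above is given below, using an LSTM in PyTorch on synthetic data; the feature count, network size and labels are placeholders rather than the paper's specification.
```python
import torch
import torch.nn as nn

class LOBClassifier(nn.Module):
    def __init__(self, n_features=40, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # classify from the last hidden state

# toy training loop on random data standing in for depth/market-order sequences
model = LOBClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(256, 20, 40)             # 256 sequences of 20 book snapshots
y = torch.randint(0, 2, (256,))          # 1 = price-flip at the next event, 0 = none
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```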
  5. By: Nalan Basturk (Maastricht University & RCEA); Stefano Grassi (University of Rome “Tor Vergata”); Lennart Hoogerheide (Vrije Universiteit Amsterdam & Tinbergen Institute); Anne Opschoor (Vrije Universiteit Amsterdam & Tinbergen Institute); Herman K. van Dijk (Erasmus University Rotterdam, Norges Bank (Central Bank of Norway) & Tinbergen Institute & RCEA)
    Abstract: This paper presents the R package MitISEM (mixture of t by importance sampling weighted expectation maximization) which provides an automatic and flexible two-stage method to approximate a non-elliptical target density kernel – typically a posterior density kernel – using an adaptive mixture of Student-t densities as approximating density. In the first stage a mixture of Student-t densities is fitted to the target using an expectation maximization algorithm where each step of the optimization procedure is weighted using importance sampling. In the second stage this mixture density is a candidate density for efficient and robust application of importance sampling or the Metropolis-Hastings (MH) method to estimate properties of the target distribution. The package enables Bayesian inference and prediction on model parameters and probabilities, in particular, for models where densities have multi-modal or other non-elliptical shapes like curved ridges. These shapes occur in research topics in several scientific fields, for instance the analysis of DNA data in bio-informatics, the acquisition of loans by heterogeneous groups in financial economics, and the analysis of education's effect on earned income in labor economics. The package MitISEM also provides an extended algorithm, 'sequential MitISEM', which substantially decreases computation time when the target density has to be approximated for increasing data samples. This occurs when the posterior or predictive density is updated with new observations and/or when one computes model probabilities using predictive likelihoods. We illustrate the MitISEM algorithm using three canonical statistical and econometric models that are characterized by several types of non-elliptical posterior shapes and that describe well-known data patterns in econometrics and finance. We show that MH using the candidate density obtained by MitISEM outperforms, in terms of numerical efficiency, MH using a simpler candidate, as well as the Gibbs sampler. The MitISEM approach is also used for Bayesian model comparison using predictive likelihoods.
    Keywords: Finite mixtures, Student-t densities, importance sampling, MCMC, Metropolis-Hastings algorithm, expectation maximization, Bayesian inference, R software
    Date: 2017–06–26
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2017_10&r=cmp
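    To illustrate the second MitISEM stage, the Python sketch below performs importance sampling with a mixture of Student-t densities as candidate for a bimodal target; here the mixture is hand-picked rather than fitted by the IS-weighted EM step, and the target is a toy density, so this is a sketch of the idea rather than the package's algorithm.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def target_kernel(x):
    # bimodal, non-elliptical posterior kernel (assumed for illustration)
    return 0.6 * stats.norm.pdf(x, -2.0, 0.7) + 0.4 * stats.norm.pdf(x, 2.5, 1.2)

# candidate: mixture of two Student-t densities (weights and locations assumed,
# whereas MitISEM would fit them by importance-sampling-weighted EM)
w = np.array([0.55, 0.45])
comps = [stats.t(df=5, loc=-2.0, scale=1.0), stats.t(df=5, loc=2.5, scale=1.5)]

def candidate_pdf(x):
    return sum(wi * c.pdf(x) for wi, c in zip(w, comps))

def candidate_rvs(n):
    idx = rng.choice(len(comps), size=n, p=w)
    return np.array([comps[i].rvs(random_state=rng) for i in idx])

draws = candidate_rvs(20000)
weights = target_kernel(draws) / candidate_pdf(draws)     # importance weights
post_mean = np.sum(weights * draws) / np.sum(weights)     # self-normalised IS estimate
ess = np.sum(weights) ** 2 / np.sum(weights ** 2)         # effective sample size
print(f"IS posterior mean ≈ {post_mean:.3f}, ESS ≈ {ess:.0f}")
```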
  6. By: Dietrich, Stephan (UNU-MERIT, and Maastricht University); Malerba, Daniele (GDI, University of Manchester); Barrientos, Armando (GDI, University of Manchester); Gassmann, Franziska (UNU-MERIT, and Maastricht University); Mohnen, Pierre (UNU-MERIT, and Maastricht University); Tirivayi, Nyasha (UNU-MERIT, and Maastricht University); Kavuma, Susan (Makerere University); Matovu, Fred (Makerere University)
    Abstract: In this paper we assess the short- and mid-term effects of two cash transfer programmes in Uganda in terms of child underweight, school attainment, and the monetary returns to these indirect effects. Using a micro-simulation approach we test how the scale-up of these pilot interventions could affect human capital indicators and income growth. We first use panel data to estimate the links between income, child health, and school attainment. Thereafter we insert the estimates in a micro-simulation model to predict how cash transfer programmes could generate income returns through higher education attainment and compare programmes in terms of their rates of return.
    Keywords: Cash Transfer, Uganda, Education, Child Health, Simulation
    JEL: I25 I26 I15 H54 O15
    Date: 2017–06–22
    URL: http://d.repec.org/n?u=RePEc:unm:unumer:2017029&r=cmp
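    The return calculation behind such a micro-simulation can be sketched in a few lines: an (assumed) schooling response to the transfer, an (assumed) return to schooling, and a comparison of discounted costs and income gains. All magnitudes below are placeholders, not the paper's estimates.
```python
def simulate_child(transfer_per_year, years_of_transfer=5,
                   schooling_gain_per_transfer_year=0.2,   # extra school years (assumed)
                   return_per_school_year=0.08,            # Mincerian return (assumed)
                   base_adult_income=1000.0, work_years=30, discount=0.05):
    extra_schooling = schooling_gain_per_transfer_year * years_of_transfer
    adult_income = base_adult_income * (1 + return_per_school_year) ** extra_schooling
    # discounted cost of the transfer programme
    cost = sum(transfer_per_year / (1 + discount) ** t for t in range(years_of_transfer))
    # discounted gain in adult income relative to the no-transfer baseline
    gain = sum((adult_income - base_adult_income) / (1 + discount) ** t
               for t in range(years_of_transfer, years_of_transfer + work_years))
    return gain / cost   # benefit-cost ratio as a crude "rate of return"

print(f"benefit-cost ratio: {simulate_child(transfer_per_year=100.0):.2f}")
```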
  7. By: Takaya Fukui (Mizuho Securities Co., Ltd.); Akihiko Takahashi (Faculty of Economics, University of Tokyo)
    Abstract: This paper considers investment methods based on deep learning with neural networks. In particular, since different specifications of the loss function give rise to different investment strategies, the current work presents two examples: return anomalies detected by supervised deep learning (SL) and profit maximization by deep reinforcement learning (RL). It also applies the learning of individual asset return dynamics to portfolio strategies. Moreover, it turns out that investment performance is quite sensitive to exogenously specified items such as the frequency of the input data (e.g. monthly or daily returns), the choice of learning method, the updating of the learned model, the number of layers in the network, and the number of units in the intermediate layers. In particular, RL delivers relatively good records in portfolio investment, and a further analysis suggests the possibility of better performance by reducing the number of units from the input to the intermediate layers.
    URL: http://d.repec.org/n?u=RePEc:tky:jseres:2017cj287&r=cmp
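    The supervised-learning variant described above can be sketched as a small feed-forward network trained on a mean-squared-error forecast loss, with positions taken on the sign of the forecast; the architecture, lookback and data below are illustrative placeholders, not the authors' specification.
```python
import torch
import torch.nn as nn

lookback = 12
net = nn.Sequential(nn.Linear(lookback, 32), nn.ReLU(),
                    nn.Linear(32, 32), nn.ReLU(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

returns = 0.01 * torch.randn(600)                          # stand-in monthly return series
X = torch.stack([returns[t - lookback:t] for t in range(lookback, 600)])
y = returns[lookback:600].unsqueeze(1)                     # next-period return as target

for epoch in range(200):                                   # forecast MSE as one of many
    opt.zero_grad()                                        # possible loss specifications
    loss = nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()

position = torch.sign(net(X[-1:]))                         # long if forecast > 0, else short
print(f"training MSE: {loss.item():.6f}, current position: {position.item():+.0f}")
```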
  8. By: Sylvain Barde; Sander van der Hoog
    Abstract: Despite recent advances in bringing agent-based models (ABMs) to the data, the estimation or calibration of model parameters remains a challenge, especially when it comes to large-scale agent-based macroeconomic models. Most methods, such as the method of simulated moments (MSM), require in-the-loop simulation of new data, which may not be feasible for such computationally heavy simulation models. The purpose of this paper is to provide a proof-of-concept of a generic empirical validation methodology for such large-scale simulation models. We introduce an alternative 'large-scale' empirical validation approach, and apply it to the Eurace@Unibi macroeconomic simulation model (Dawid et al., 2016). This model was selected because it displays strong emergent behaviour and is able to generate a wide variety of nonlinear economic dynamics, including endogenous business and financial cycles. In addition, it is a computationally heavy simulation model, so it fits our targeted use case. The validation protocol consists of three stages. At the first stage we use Nearly-Orthogonal Latin Hypercube sampling (NOLH) in order to generate a set of 513 parameter combinations with good space-filling properties. At the second stage we use the recently developed Markov Information Criterion (MIC) to score the simulated data against empirical data. Finally, at the third stage we use stochastic kriging to construct a surrogate model of the MIC response surface, resulting in an interpolation of the response surface as a function of the parameters. The parameter combinations providing the best fit to the data are then identified as the local minima of the interpolated MIC response surface. The Model Confidence Set (MCS) procedure of Hansen et al. (2011) is used to restrict the set of model calibrations to those models that cannot be rejected to have equal predictive ability, at a given confidence level. Validation of the surrogate model is carried out by re-running the second stage of the analysis on the optima identified in this way and cross-checking that the realised MIC scores equal the MIC scores predicted by the surrogate model. The results we obtain so far look promising as a first proof-of-concept for the empirical validation methodology, since we are able to validate the model using empirical data series for 30 OECD countries and the euro area. The internal validation procedure of the surrogate model also suggests that the combination of NOLH sampling, MIC measurement and stochastic kriging yields reliable predictions of the MIC scores for samples not included in the original NOLH sample set. In our opinion, this is a strong indication that the method we propose could provide a viable statistical machine learning technique for the empirical validation of (large-scale) ABMs.
    Keywords: Statistical machine learning; surrogate modelling; empirical validation
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:ukc:ukcedp:1712&r=cmp
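    The three-stage protocol can be illustrated with simplified stand-ins: an ordinary Latin hypercube instead of NOLH, a toy discrepancy function instead of the MIC, and a Gaussian-process regressor in place of stochastic kriging. The sketch below only shows how the pieces fit together.
```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def score(params):
    # stand-in for "simulate the model at these parameters and score it against data"
    x, y = params
    return (x - 0.3) ** 2 + (y - 0.7) ** 2 + 0.01 * rng.standard_normal()

# Stage 1: space-filling design over a 2-D parameter space
design = qmc.LatinHypercube(d=2, seed=1).random(n=129)

# Stage 2: score each design point (in the paper, the expensive MIC evaluation)
scores = np.array([score(p) for p in design])

# Stage 3: fit a surrogate of the score surface and search it cheaply
surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(design, scores)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200)), -1).reshape(-1, 2)
pred = surrogate.predict(grid)
best = grid[np.argmin(pred)]
print(f"surrogate-predicted optimum near {best}, true optimum (0.3, 0.7)")
```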
  9. By: Kristoufek, Ladislav; Vošvrda, Miloslav S.
    Abstract: We present a novel approach towards the financial Ising model. Most studies utilize the model to find settings which generate returns closely mimicking financial stylized facts such as fat tails, volatility clustering and persistence, and others. We tackle the model's utility from the other side and look for the combination of parameters which yields return dynamics of an efficient market in view of the efficient market hypothesis. Working with the Ising model, we are able to present nicely interpretable results, as the model is based on only two parameters. Apart from showing the results of our simulation study, we offer a new interpretation of the Ising model parameters via inverse temperature and entropy. We show that in fact market frictions (up to a certain level) and herding behavior of the market participants do not go against market efficiency; on the contrary, they are needed for the markets to be efficient.
    Keywords: Ising model, efficient market hypothesis, Monte Carlo simulation
    JEL: G02 G14 G17
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:zbw:fmpwps:68&r=cmp
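    A minimal "financial Ising" simulation of the sort discussed above can be written with a Metropolis update, an inverse temperature and a herding (coupling) strength, with returns proxied by changes in magnetization; the specification below is illustrative and differs from the paper's exact setup.
```python
import numpy as np

rng = np.random.default_rng(2)
N, beta, J, sweeps = 400, 1.0, 0.8, 500
spins = rng.choice([-1, 1], size=N)          # agents on a ring, spin = buy(+1)/sell(-1)
magnetization = []

for sweep in range(sweeps):
    for _ in range(N):                        # one Metropolis sweep over the agents
        i = rng.integers(N)
        neighbours = spins[(i - 1) % N] + spins[(i + 1) % N]
        dE = 2 * J * spins[i] * neighbours    # energy change from flipping agent i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] *= -1
    magnetization.append(spins.mean())

returns = np.diff(np.array(magnetization))    # price-return proxy: change in net demand
print(f"return std: {returns.std():.4f}, "
      f"excess kurtosis: {np.mean(returns**4) / returns.var()**2 - 3:.2f}")
```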
  10. By: Dominique Guegan (Centre d'Economie de la Sorbonne and LabEx ReFi); Bertrand Hassani (Group Capgemini and Centre d'Economie de la Sorbonne and LabEx ReFi)
    Abstract: The arrival of big data strategies is threatening the latest trends in financial regulation related to the simplification of models and the enhancement of the comparability of approaches chosen by financial institutions. Indeed, the intrinsically dynamic philosophy of Big Data strategies is almost incompatible with the current legal and regulatory framework, as illustrated in this paper. Besides, as presented in our application to credit scoring, the model selection may also evolve dynamically, forcing both practitioners and regulators to develop libraries of models, strategies allowing them to switch from one model to another, as well as supervisory approaches allowing financial institutions to innovate in a risk-mitigated environment. The purpose of this paper is therefore to analyse the issues related to the Big Data environment, and in particular to machine learning models, highlighting the issues present in the current framework confronting the data flows, the model selection process and the necessity to generate appropriate outcomes.
    Keywords: Financial Regulation; Algorithm; Big Data; Risk
    JEL: C55
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:17034&r=cmp
  11. By: Xiaojiao Yu
    Abstract: Online lending has disrupted the traditional consumer banking sector with more effective loan processing. Risk prediction and monitoring are critical for the success of the business model. Traditional credit scoring models fall short of applying big data technology to building risk models. In this manuscript, data of various formats and sizes were collected from public websites and third parties and assembled with clients' loan application data. Ensemble machine learning models, a random forest model and an XGBoost model, were built and trained on the historical transaction data and subsequently tested on separate data. The XGBoost model shows a higher K-S value, suggesting better classification capability in this task. The top 10 important features from the two models suggest that external data such as the zhimaScore, multi-platform loan stacking information, and social network information are important factors in predicting loan default probability.
    Date: 2017–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1707.04831&r=cmp
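    The evaluation described above can be sketched with synthetic data: train a tree-ensemble default model, score a held-out sample, and compute the K-S statistic between the score distributions of defaulters and non-defaulters. A random forest is shown; xgboost.XGBClassifier could be substituted in the same way. The data and feature set below are synthetic stand-ins, not the paper's loan data.
```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)                 # ~10% defaults, synthetic
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]                   # predicted default probability

# K-S value: maximum distance between score distributions of defaulters vs. non-defaulters
ks = ks_2samp(scores[y_te == 1], scores[y_te == 0]).statistic
top = np.argsort(model.feature_importances_)[::-1][:10]    # top-10 important features
print(f"K-S = {ks:.3f}; top-10 feature indices: {top.tolist()}")
```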

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.