nep-cmp New Economics Papers
on Computational Economics
Issue of 2017‒12‒03
eleven papers chosen by



  1. Artificial Intelligence as Structural Estimation: Economic Interpretations of Deep Blue, Bonanza, and AlphaGo By Mitsuru Igami
  2. Estimation of agent-based models using sequential Monte Carlo methods By Lux, Thomas
  3. Economic Evaluation of Fuel Treatment Effectiveness. Agent-Based Model Simulation of Fire Spread Dynamics. By Fontana, Magda; Chersoni, Giulia
  4. Innovation, Finance, and Economic Growth: An Agent-Based Approach By Giorgio Fagiolo; Daniele Giachini; Andrea Roventini
  5. A Numerical Scheme for a Singular Control Problem: Investment-Consumption under Proportional Transaction Costs By Arash Fahim; Wan-Yu Tsai
  6. Dual control Monte Carlo method for tight bounds of value function under Heston stochastic volatility model By Jingtang Ma; Wenyuan Li; Harry Zheng
  7. Quantization goes Polynomial By Giorgia Callegaro; Lucio Fiorin; Andrea Pallavicini
  8. Simulating the deep decarbonisation of residential heating for limiting global warming to 1.5°C By Florian Knobloch; Hector Pollitt; Unnada Chewpreecha; Jean-Francois Mercure
  9. Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics By Erik Brynjolfsson; Daniel Rock; Chad Syverson
  10. Wavelet Variance Ratio Test and Wavestrapping for the Determination of the Cointegration Rank By Burak Eroglu
  11. Orthogonal Machine Learning: Power and Limitations By Lester Mackey; Vasilis Syrgkanis; Ilias Zadik

  1. By: Mitsuru Igami
    Abstract: Artificial intelligence (AI) has achieved superhuman performance in a growing number of tasks, including the classical games of chess, shogi, and Go, but understanding and explaining AI remain challenging. This paper studies the machine-learning algorithms for developing the game AIs, and provides their structural interpretations. Specifically, chess-playing Deep Blue is a calibrated value function, whereas shogi-playing Bonanza represents an estimated value function via Rust's (1987) nested fixed-point method. AlphaGo's "supervised-learning policy network" is a deep neural network (DNN) version of Hotz and Miller's (1993) conditional choice probability estimates; its "reinforcement-learning value network" is equivalent to Hotz, Miller, Sanders, and Smith's (1994) simulation method for estimating the value function. Their performances suggest DNNs are a useful functional form when the state space is large and data are sparse. Explicitly incorporating strategic interactions and unobserved heterogeneity in the data-generating process would further improve AIs' explicability.
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1710.10967&r=cmp
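As a rough illustration of the two estimation ideas the abstract above maps the game AIs to, the sketch below estimates conditional choice probabilities (CCPs) by simple frequencies and then approximates a value function by forward-simulating discounted payoffs under the estimated policy, in the spirit of Hotz-Miller and Hotz-Miller-Sanders-Smith. The toy Markov decision problem, its payoffs, transitions, and the "expert" policy are invented for the example; this is not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, beta = 5, 2, 0.95
true_policy = rng.dirichlet(np.ones(n_actions), size=n_states)   # hypothetical "expert" play
payoff = rng.normal(size=(n_states, n_actions))                  # per-(state, action) payoffs
transition = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

# Step 1: estimate CCPs by simple frequencies from observed (state, action) pairs
# generated by the expert policy.
counts = np.zeros((n_states, n_actions))
for _ in range(20_000):
    s = rng.integers(n_states)
    a = rng.choice(n_actions, p=true_policy[s])
    counts[s, a] += 1
ccp_hat = counts / counts.sum(axis=1, keepdims=True)

# Step 2: approximate V(s) by forward-simulating discounted payoffs under the
# estimated CCPs (the simulation idea of Hotz, Miller, Sanders and Smith).
def simulated_value(s0, n_sims=500, horizon=100):
    total = 0.0
    for _ in range(n_sims):
        s, disc = s0, 1.0
        for _ in range(horizon):
            a = rng.choice(n_actions, p=ccp_hat[s])
            total += disc * payoff[s, a]
            disc *= beta
            s = rng.choice(n_states, p=transition[s, a])
    return total / n_sims

print("estimated CCPs:\n", np.round(ccp_hat, 2))
print("simulated V(0) ~", round(simulated_value(0), 3))
```

With a large state space and sparse data, the frequency estimator in step 1 would be replaced by a flexible function approximator such as a deep neural network, which is the substitution the abstract emphasises.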
  2. By: Lux, Thomas
    Abstract: Estimation of agent-based models is currently an intense area of research. Recent contributions have to a large extent resorted to simulation-based methods, mostly using some form of simulated method of moments (SMM) estimation. There is, however, an entire branch of statistical methods that appears promising but has, to our knowledge, never been applied to estimate agent-based models in economics and finance: Markov chain Monte Carlo methods designed for state space models or models with latent variables. This latter class of models seems particularly relevant, as agent-based models typically consist of some latent and some observable variables, since not all characteristics of the agents are observable. Indeed, one might often be interested not only in estimating the parameters of a model, but also in inferring the time development of some latent variable. However, agent-based models interpreted as latent variable models would typically be characterized by non-linear dynamics and non-Gaussian fluctuations and thus require a computational approach to statistical inference. We resort to Sequential Monte Carlo (SMC) estimation based on a particle filter, which is used to numerically approximate the conditional densities that enter into the likelihood function of the problem. With this approximation we simultaneously obtain parameter estimates and filtered state probabilities for the unobservable variable(s) that drive(s) the dynamics of the observable time series. In our examples, the observable series are asset returns (or prices), while the unobservable variables are some measure of agents' aggregate sentiment. We apply SMC to two selected agent-based models of speculative dynamics with somewhat different flavors. The empirical application to a selection of financial data includes an explicit comparison of the goodness-of-fit of both models.
    Keywords: agent-based models, estimation, Markov chain Monte Carlo, particle filter
    JEL: G12 C15 C58
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:201707&r=cmp
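A minimal sketch of the particle-filter likelihood approximation described in the abstract above, for a toy model in which an AR(1) latent "sentiment" drives the variance of observed returns. The model, parameter values, and simulated data are illustrative and are not taken from the paper.

```python
import numpy as np

def particle_filter_loglik(returns, rho, sigma_x, n_particles=1000, seed=0):
    """Bootstrap particle filter for a toy latent-sentiment model:
       x_t = rho * x_{t-1} + sigma_x * eta_t   (latent sentiment)
       r_t ~ N(0, exp(x_t))                    (observed return)
    Returns the log-likelihood approximation and the filtered means of x_t."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x / np.sqrt(1 - rho**2), n_particles)  # stationary start
    loglik, filtered_means = 0.0, []
    for r in returns:
        # propagate particles through the latent dynamics
        x = rho * x + sigma_x * rng.normal(size=n_particles)
        # importance weights from the observation density
        w = np.exp(-0.5 * r**2 / np.exp(x)) / np.sqrt(2 * np.pi * np.exp(x))
        loglik += np.log(w.mean() + 1e-300)
        w /= w.sum()
        filtered_means.append(np.sum(w * x))
        # multinomial resampling
        x = rng.choice(x, size=n_particles, p=w)
    return loglik, np.array(filtered_means)

# Example: evaluate the likelihood on simulated data for a coarse grid of rho values.
rng = np.random.default_rng(1)
x_true, r_obs = 0.0, []
for _ in range(500):
    x_true = 0.95 * x_true + 0.2 * rng.normal()
    r_obs.append(np.exp(0.5 * x_true) * rng.normal())
for rho in (0.5, 0.9, 0.95):
    ll, _ = particle_filter_loglik(np.array(r_obs), rho, 0.2)
    print(rho, round(ll, 2))
```

In an actual estimation exercise, the log-likelihood returned by the filter would be maximised over the model parameters (or embedded in an MCMC sampler), and the filtered means give the inferred path of the latent sentiment variable.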
  3. By: Fontana, Magda; Chersoni, Giulia (University of Turin)
    Abstract: The paper assesses the effectiveness of a fuel management treatment by modeling the main fire regime drivers through a spatially explicit fire disturbance agent-based model. It covers the interplay between spatial heterogeneity and neighboring interactions among the factors that drive fire spread dynamics. Finally, it argues that fire prevention policy should address growing fire risk exposure at the regional level.
    Date: 2017–09
    URL: http://d.repec.org/n?u=RePEc:uto:dipeco:201729&r=cmp
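A toy spatially explicit fire-spread simulation in the spirit of the abstract above: fire propagates between neighbouring lattice cells, and fuel treatment lowers the spread probability on treated cells. Grid size, spread probabilities, and the treatment effect are invented for illustration and are not the authors' calibration.

```python
import numpy as np

def simulate_fire(grid_size=100, p_spread=0.45, treated_frac=0.0,
                  treatment_effect=0.5, seed=0):
    """Toy fire-spread ABM on a lattice. Each burning cell ignites its 4
    neighbours with probability p_spread, scaled down by `treatment_effect`
    on treated cells. Returns the burned fraction of the landscape."""
    rng = np.random.default_rng(seed)
    treated = rng.random((grid_size, grid_size)) < treated_frac
    state = np.zeros((grid_size, grid_size), dtype=int)  # 0 unburned, 1 burning, 2 burned
    state[grid_size // 2, grid_size // 2] = 1             # single ignition point
    while (state == 1).any():
        burning = np.argwhere(state == 1)
        for i, j in burning:
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid_size and 0 <= nj < grid_size and state[ni, nj] == 0:
                    p = p_spread * (treatment_effect if treated[ni, nj] else 1.0)
                    if rng.random() < p:
                        state[ni, nj] = 1
            state[i, j] = 2
    return (state == 2).mean()

# Average burned fraction for increasing shares of treated cells.
for frac in (0.0, 0.2, 0.4):
    runs = [simulate_fire(treated_frac=frac, seed=s) for s in range(10)]
    print(frac, round(float(np.mean(runs)), 3))
```

Comparing burned fractions across treatment shares is the kind of counterfactual on which an economic evaluation of fuel treatment would build.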
  4. By: Giorgio Fagiolo; Daniele Giachini; Andrea Roventini
    Abstract: This paper extends the endogenous-growth agent-based model in Fagiolo and Dosi (2003) to study the finance-growth nexus. We explore industries where firms produce a homogeneous good using existing technologies, perform R&D activities to introduce new techniques, and imitate the most productive practices. Unlike the original model, we assume that both exploration and imitation require resources provided by banks, which pool agent savings and finance new projects via loans. We find that banking activity has a positive impact on growth. However, excessive financialization can hamper growth. Indeed, we find a significant and robust inverted-U shaped relation between financial depth and growth. Overall, our results stress the fundamental (and still poorly understood) role played by innovation in the finance-growth nexus.
    Keywords: Agent-based Models, Innovation, Exploration vs. Exploitation, Endogenous Growth, Banking Sector, Finance-Growth Nexus
    Date: 2017–11–24
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2017/30&r=cmp
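A heavily simplified sketch of the exploration/imitation mechanism gated by bank finance that the abstract above describes. It ignores goods markets, savings pooling, and loan pricing, all parameters are invented, and it is not intended to reproduce the paper's inverted-U result.

```python
import numpy as np

def toy_productivity_growth(depth, n_firms=100, n_periods=200, seed=0):
    """Each period a firm obtains bank finance with probability `depth`
    ("financial depth"); financed firms either explore (draw a new technique
    around their current productivity) or imitate current best practice."""
    rng = np.random.default_rng(seed)
    prod = np.ones(n_firms)
    for _ in range(n_periods):
        financed = rng.random(n_firms) < depth
        explore = rng.random(n_firms) < 0.2
        draws = prod * np.exp(rng.normal(0.0, 0.15, n_firms))       # R&D draw
        prod = np.where(financed & explore, np.maximum(prod, draws), prod)
        prod = np.where(financed & ~explore,
                        prod + 0.3 * (prod.max() - prod), prod)      # partial imitation
    return np.log(prod.mean()) / n_periods                           # avg growth of mean productivity

for depth in (0.2, 0.5, 0.8):
    print("financial depth", depth, "->", round(toy_productivity_growth(depth), 4))
```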
  5. By: Arash Fahim; Wan-Yu Tsai
    Abstract: This paper concerns the numerical solution of a fully nonlinear parabolic double obstacle problem arising from a finite-horizon portfolio selection problem with proportional transaction costs. We consider the optimal allocation of wealth among multiple stocks and a bank account in order to maximize the finite-horizon discounted utility of consumption. The problem is mainly governed by a time-dependent Hamilton-Jacobi-Bellman equation with gradient constraints. We propose a numerical method that combines Monte Carlo simulation, to cope with the high dimensionality of the problem, with a finite difference method to approximate the gradients of the value function. Numerical results illustrate the behavior of the optimal trading strategies, which satisfy all qualitative properties proved in Dai et al. (2009) and Chen and Dai (2013).
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1711.01017&r=cmp
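A small sketch of the two numerical ingredients named in the abstract above: Monte Carlo evaluation of a value function and finite-difference approximation of its gradient, here for a toy one-stock consumption problem under a fixed (suboptimal) strategy. All parameters are illustrative; the paper's scheme solves the full HJB equation with gradient constraints rather than evaluating a fixed policy.

```python
import numpy as np

def mc_value(w0, pi=0.5, c_rate=0.05, mu=0.07, sigma=0.2, r=0.02,
             gamma=2.0, beta=0.03, T=5.0, n_steps=60, n_paths=20000, seed=0):
    """Monte Carlo estimate of discounted CRRA utility of consumption for a
    fixed strategy: invest fraction `pi` in the stock, consume at rate c_rate*W.
    (Consumption utility only, no bequest term.)"""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    w = np.full(n_paths, w0, dtype=float)
    value = np.zeros(n_paths)
    for k in range(n_steps):
        c = c_rate * w
        value += np.exp(-beta * k * dt) * (c**(1 - gamma)) / (1 - gamma) * dt
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        w += (r * w + pi * (mu - r) * w - c) * dt + pi * sigma * w * dW
        w = np.maximum(w, 1e-8)        # guard: keep wealth positive in the toy discretisation
    return value.mean()

# Central finite difference for V'(w0), the kind of gradient that enters the
# gradient constraints of the HJB variational inequality.
w0, h = 1.0, 0.05
grad = (mc_value(w0 + h) - mc_value(w0 - h)) / (2 * h)
print("V(1) ~", round(mc_value(w0), 4), "  V'(1) ~", round(grad, 4))
```

Using the same seed for both evaluations (common random numbers) keeps the finite-difference estimate from being swamped by Monte Carlo noise.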
  6. By: Jingtang Ma; Wenyuan Li; Harry Zheng
    Abstract: The aim of this paper is to study the fast computation of lower and upper bounds on the value function for utility maximization under the Heston stochastic volatility model with general utility functions. It is well known that the HJB equation admits a closed-form solution for power utility, thanks to its homothetic property, but no closed-form solution is available for general utilities, and there is little literature on numerical schemes for solving the HJB equation under the Heston model. In this paper we propose an efficient dual control Monte Carlo method for computing tight lower and upper bounds of the value function. We identify a particular form of the dual control which leads to a closed-form upper bound for a class of utility functions, including power, non-HARA and Yaari utilities. Finally, we perform numerical tests to assess the efficiency, accuracy, and robustness of the method. The numerical results strongly support the proposed scheme.
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1710.10487&r=cmp
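The sketch below shows only the easy half of the sandwich: a Monte Carlo lower bound obtained by evaluating a fixed constant-proportion strategy under Heston dynamics with a full-truncation Euler scheme. The dual upper bound requires the particular dual control derived in the paper and is not reproduced here; all parameter values are illustrative.

```python
import numpy as np

def heston_lower_bound(x0=1.0, v0=0.04, pi=0.5, mu=0.08, r=0.02,
                       kappa=2.0, theta=0.04, xi=0.3, rho=-0.7,
                       gamma=0.5, T=1.0, n_steps=100, n_paths=50000, seed=0):
    """Monte Carlo lower bound on the utility-maximisation value function under
    the Heston model: expected power utility of terminal wealth for a fixed
    (generally suboptimal) constant-proportion strategy pi."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.normal(size=n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n_paths)
        vol = np.sqrt(np.maximum(v, 0.0))
        # log-Euler step for wealth under the constant-proportion strategy
        x *= np.exp((r + pi * (mu - r) - 0.5 * pi**2 * v) * dt + pi * vol * np.sqrt(dt) * z1)
        # full-truncation Euler step for the variance process
        v += kappa * (theta - v) * dt + xi * vol * np.sqrt(dt) * z2
        v = np.maximum(v, 0.0)
    u = x**gamma / gamma          # power utility U(x) = x^gamma / gamma, 0 < gamma < 1
    return u.mean(), u.std() / np.sqrt(n_paths)

lb, se = heston_lower_bound()
print("lower bound ~", round(lb, 5), "+/-", round(1.96 * se, 5))
```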
  7. By: Giorgia Callegaro; Lucio Fiorin; Andrea Pallavicini
    Abstract: Quantization algorithms have recently been successfully adopted in option pricing problems to speed up Monte Carlo simulations, thanks to the high convergence rate of the numerical approximation. In particular, recursive marginal quantization has proven to be a flexible and versatile tool when applied to stochastic volatility processes. In this paper we apply these techniques for the first time to the family of polynomial processes, exploiting, whenever possible, their peculiar properties. We derive theoretical results to assess the approximation errors, and in numerical examples we present practical tools for fast exotic option pricing.
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1710.11435&r=cmp
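For intuition, the sketch below applies plain quadratic (Lloyd) quantization to a log-normal terminal price and prices a call as a finite weighted sum over the grid. The paper's recursive marginal quantization instead builds grids step by step along the discretised process, and the polynomial-process structure it exploits is not used here.

```python
import numpy as np

def lloyd_quantize(samples, n_points=20, n_iter=50):
    """Lloyd's algorithm: an n_points grid (with weights) that reduces the
    quadratic quantization error of the empirical distribution of `samples`."""
    grid = np.quantile(samples, (np.arange(n_points) + 0.5) / n_points)
    for _ in range(n_iter):
        idx = np.abs(samples[:, None] - grid[None, :]).argmin(axis=1)  # Voronoi cells
        for j in range(n_points):
            if (idx == j).any():
                grid[j] = samples[idx == j].mean()                     # centroid update
    idx = np.abs(samples[:, None] - grid[None, :]).argmin(axis=1)
    weights = np.bincount(idx, minlength=n_points) / len(samples)
    return grid, weights

# Price a European call on a log-normal terminal price from the quantized grid:
# the expectation becomes a weighted sum over the n_points grid values.
rng = np.random.default_rng(0)
S0, r, sigma, T, K = 100.0, 0.01, 0.2, 1.0, 100.0
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * rng.normal(size=100_000))
grid, w = lloyd_quantize(ST)
price = np.exp(-r * T) * np.sum(w * np.maximum(grid - K, 0.0))
print("quantized call price ~", round(price, 3))   # Black-Scholes value is about 8.43
```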
  8. By: Florian Knobloch; Hector Pollitt; Unnada Chewpreecha; Jean-Francois Mercure
    Abstract: We take a simulation-based approach to modelling ten scenarios that aim at near-zero global CO2 emissions in the residential heating sector by 2050, using different combinations of policy instruments. Their effectiveness depends strongly on behavioural decision-making by households, especially in a context of deep decarbonisation and rapid transformation. We therefore use the non-equilibrium bottom-up model FTT:Heat, which makes it possible to simulate policy-induced technology transitions in a context of inertia and bounded rationality. Results show that a decarbonisation of residential heating is achievable by 2050, but requires substantial policy efforts from 2020 onwards. Due to the long average lifetimes of heating equipment, the transition needs decades rather than years. Policy mixes are projected to be more effective at driving the market for new technologies than reliance on a carbon tax as the only policy instrument. In combination with subsidies for renewables, near-zero emissions can be achieved with a residential carbon tax of 50-150 Euro/tCO2. The policy-induced technology transition would initially increase the heating costs faced by households but lead to net savings in the medium term. From a global perspective, the decarbonisation largely depends on policy implementation in Europe, North America, China and Russia.
    Date: 2017–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1710.11019&r=cmp
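A toy technology-diffusion sketch, not the FTT:Heat model itself: market shares of heating technologies adjust slowly toward a logit choice over levelised costs, so long equipment lifetimes create inertia, while a rising carbon tax plus subsidies shift the cost ranking. All technologies, costs, emission factors, and policy paths are invented for illustration.

```python
import numpy as np

techs = ["gas boiler", "oil boiler", "heat pump", "district heat"]
base_cost = np.array([60.0, 70.0, 90.0, 80.0])     # Euro/MWh, illustrative
emission = np.array([0.20, 0.27, 0.0, 0.05])       # tCO2/MWh, illustrative
subsidy = np.array([0.0, 0.0, 25.0, 5.0])          # Euro/MWh subsidy on low-carbon options
shares = np.array([0.55, 0.25, 0.10, 0.10])        # initial market shares
lifetime, sensitivity = 20.0, 0.15                  # years; logit cost sensitivity

for year in range(2020, 2051):
    carbon_tax = min(150.0, 5.0 * (year - 2020))    # Euro/tCO2, ramping up
    cost = base_cost + carbon_tax * emission - subsidy
    target = np.exp(-sensitivity * cost)
    target /= target.sum()                           # logit shares at current costs
    # only about 1/lifetime of the installed stock is replaced each year (inertia)
    shares += (target - shares) / lifetime
    if year % 10 == 0:
        print(year, dict(zip(techs, np.round(shares, 2))))
```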
  9. By: Erik Brynjolfsson; Daniel Rock; Chad Syverson
    Abstract: We live in an age of paradox. Systems using artificial intelligence match or surpass human level performance in more and more domains, leveraging rapid advances in other technologies and driving soaring stock prices. Yet measured productivity growth has declined by half over the past decade, and real income has stagnated since the late 1990s for a majority of Americans. We describe four potential explanations for this clash of expectations and statistics: false hopes, mismeasurement, redistribution, and implementation lags. While a case can be made for each, we argue that lags have likely been the biggest contributor to the paradox. The most impressive capabilities of AI, particularly those based on machine learning, have not yet diffused widely. More importantly, like other general purpose technologies, their full effects won’t be realized until waves of complementary innovations are developed and implemented. The required adjustment costs, organizational changes, and new skills can be modeled as a kind of intangible capital. A portion of the value of this intangible capital is already reflected in the market value of firms. However, going forward, national statistics could fail to measure the full benefits of the new technologies and some may even have the wrong sign.
    JEL: D2 O3 O4
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:24001&r=cmp
  10. By: Burak Eroglu (Istanbul Bilgi University)
    Abstract: In this paper, I propose a wavelet-based cointegration test for fractionally integrated time series. The proposed test is non-parametric and asymptotically invariant to different forms of short-run dynamics. The use of wavelets allows one to take advantage of the wavelet-based bootstrapping method known as wavestrapping. In this regard, I introduce a new wavestrapping algorithm for multivariate time series processes, specifically for cointegration tests. Monte Carlo simulations indicate that this new wavestrapping procedure can alleviate the severe size distortions generally observed in cointegration tests when the innovations possess strongly negative moving-average parameters. Additionally, I apply the proposed methodology to analyse long-run co-movements in the credit default swap market of European Union countries.
    Keywords: Fractional integration; Cointegration; Wavelet; Wavestrapping
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:bli:wpaper:1706&r=cmp
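A minimal sketch of the wavelet machinery involved: a Haar discrete wavelet transform, the scale-by-scale wavelet variances on which a variance-ratio statistic would be built, and wavestrapping by resampling coefficients within each scale. It assumes a Haar filter and a sample length that is a power of two, and it is not the author's test statistic or algorithm.

```python
import numpy as np

def haar_dwt(x):
    """Full Haar discrete wavelet transform: detail coefficients per level plus
    the final approximation (length of x must be a power of two)."""
    details, approx = [], np.asarray(x, dtype=float)
    while len(approx) > 1:
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))
        approx = (even + odd) / np.sqrt(2)
    return details, approx

def haar_idwt(details, approx):
    """Inverse Haar transform, reconstructing from coarsest to finest level."""
    for d in reversed(details):
        even = (approx + d) / np.sqrt(2)
        odd = (approx - d) / np.sqrt(2)
        approx = np.empty(2 * len(d))
        approx[0::2], approx[1::2] = even, odd
    return approx

def wavestrap(x, rng):
    """Wavestrapping: resample wavelet coefficients within each scale and invert,
    preserving the scale-by-scale variance structure of the original series."""
    details, approx = haar_dwt(x)
    boot_details = [rng.choice(d, size=len(d), replace=True) for d in details]
    return haar_idwt(boot_details, approx)

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=256))          # toy I(1) series
details, _ = haar_dwt(x)
print("wavelet variances by scale:", [round(float(np.var(d)), 2) for d in details[:4]])
print("one wavestrapped replicate, first 5 obs:", np.round(wavestrap(x, rng)[:5], 2))
```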
  11. By: Lester Mackey; Vasilis Syrgkanis; Ilias Zadik
    Abstract: Double machine learning provides $\sqrt{n}$-consistent estimates of parameters of interest even when high-dimensional or nonparametric nuisance parameters are estimated at an $n^{-1/4}$ rate. The key is to employ \emph{Neyman-orthogonal} moment equations which are first-order insensitive to perturbations in the nuisance parameters. We show that the $n^{-1/4}$ requirement can be improved to $n^{-1/(2k+2)}$ by employing a $k$-th order notion of orthogonality that grants robustness to more complex or higher-dimensional nuisance parameters. In the partially linear model setting popular in causal inference, we use Stein's lemma to show that we can construct second-order orthogonal moments if and only if the treatment residual is not normally distributed. We conclude by demonstrating the robustness benefits of an explicit doubly-orthogonal estimation procedure for treatment effect.
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1711.00342&r=cmp
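A sketch of the baseline (first-order) orthogonal moment that the paper generalises: residual-on-residual estimation of theta in the partially linear model with 2-fold cross-fitting, using a k-nearest-neighbour regression as a stand-in nuisance learner. The higher-order orthogonal moments constructed in the paper are not implemented here, and the simulated data are purely illustrative.

```python
import numpy as np

def knn_fit_predict(x_train, y_train, x_test, k=20):
    """Simple k-nearest-neighbour regression (a stand-in for any ML nuisance learner)."""
    d = np.abs(x_test[:, None] - x_train[None, :])
    idx = np.argsort(d, axis=1)[:, :k]
    return y_train[idx].mean(axis=1)

def double_ml_plm(y, t, x, seed=0):
    """First-order orthogonal (residual-on-residual) estimate of theta in the
    partially linear model  y = theta*t + g(x) + eps,  t = m(x) + eta,
    with 2-fold cross-fitting of the nuisance estimates."""
    rng = np.random.default_rng(seed)
    n = len(y)
    folds = rng.permutation(n) < n // 2
    theta_num = theta_den = 0.0
    for test in (folds, ~folds):
        train = ~test
        y_res = y[test] - knn_fit_predict(x[train], y[train], x[test])   # y - E[y|x]
        t_res = t[test] - knn_fit_predict(x[train], t[train], x[test])   # t - E[t|x]
        theta_num += np.sum(t_res * y_res)
        theta_den += np.sum(t_res * t_res)
    return theta_num / theta_den

# Simulated example with theta = 1 and nonlinear nuisance functions g and m.
rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(-2, 2, n)
t = np.sin(x) + 0.5 * rng.normal(size=n)      # treatment depends nonlinearly on x
y = 1.0 * t + np.cos(2 * x) + rng.normal(size=n)
print("theta_hat ~", round(double_ml_plm(y, t, x), 3))
```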

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.