nep-cmp New Economics Papers
on Computational Economics
Issue of 2017‒10‒22
nine papers chosen by

  1. Balancing the Equity-efficiency Trade-off in Personal Income Taxation: An Evolutionary Approach By Simone Pellegrino; Guido Perboli; Giovanni Squillero
  2. "State Space Approach to Adaptive Fuzzy Modeling: Application to Financial Investment" By Masafumi Nakano; Akihiko Takahashi; Soichiro Takahashi
  3. Fiscal stabilisation in the Euro-Area: A simulation exercise By Nicolas Carnot; Magdalena Kizior; Gilles Mourre
  4. Tax-benefit microsimulation and income redistribution in Ecuador By H. Xavier Jara; Marcelo Varela
  5. Forecasting Across Time Series Databases using Long Short-Term Memory Networks on Groups of Similar Series By Kasun Bandara; Christoph Bergmeir; Slawek Smyl
  6. Validation of Agent-Based Models in Economics and Finance By Giorgio Fagiolo; Mattia Guerini; Francesco Lamperti; Alessio Moneta; Andrea Roventini
  7. Stochastic Gradient Descent in Continuous Time: A Central Limit Theorem By Justin Sirignano; Konstantinos Spiliopoulos
  8. Per Capita Income and the Demand for Skills By Justin Caron; Thibault Fally; James R. Markusen
  9. Asymptotic Expansion as Prior Knowledge in Deep Learning Method for high dimensional BSDEs By Masaaki Fujii; Akihiko Takahashi; Masayuki Takahashi

  1. By: Simone Pellegrino (Department of Economics and Statistics (Dipartimento di Scienze Economico-Sociali e Matematico-Statistiche), University of Torino, Italy); Guido Perboli (Department of Control and Computer Engineering, Politecnico di Torino, Italy); Giovanni Squillero (Department of Control and Computer Engineering, Politecnico di Torino, Italy)
    Abstract: In this paper we propose a multi-objective evolutionary algorithm for supporting the definition of a personal income tax reform. As a case study, we apply this methodology to the Italian income tax and consider a recently implemented tax cut. Our optimization algorithm is able to determine a set of tax structures that maximize the redistributive effect of the tax while minimizing its inefficiency, measuring the former by the Reynolds-Smolensky index and the latter by the weighted average of taxpayers' effective marginal tax rates. The approach also takes into account two additional factors: the tax has to guarantee a specific revenue and to minimize the share of losing taxpayers with respect to the pre-reform situation. Experimental results clearly demonstrate that the methodology we employ can support the policy-maker's decisions in complex, real-world situations.
    Keywords: Personal Income Tax, Evolutionary Algorithms, Multi-Objective Optimization
    JEL: H23 H24
    Date: 2017–10
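    The multi-objective search described in this abstract can be sketched minimally: random candidate tax structures (here just a flat rate above a deduction threshold, applied to invented incomes) are scored on a redistribution objective and an efficiency objective, and the non-dominated (Pareto) set is kept. This is a toy stand-in for the authors' evolutionary algorithm, not their method; all data and parameter ranges are hypothetical.

```python
import random

random.seed(0)

# Hypothetical pre-tax incomes (illustrative data, not the Italian microdata).
incomes = [10, 20, 30, 50, 80, 120, 200]

def gini(xs):
    """Gini coefficient of a list of non-negative incomes."""
    xs = sorted(xs)
    n = len(xs)
    return sum((2 * i - n + 1) * x for i, x in enumerate(xs)) / (n * sum(xs))

def evaluate(rate, threshold):
    """Two objectives: redistribution (to maximize) and avg marginal rate (to minimize)."""
    net = [y - rate * max(0.0, y - threshold) for y in incomes]
    redistribution = gini(incomes) - gini(net)   # Reynolds-Smolensky-style Gini gap
    avg_mtr = sum(rate if y > threshold else 0.0 for y in incomes) / len(incomes)
    return redistribution, avg_mtr

def dominates(a, b):
    # a dominates b: no worse on both objectives, strictly better somewhere.
    return a[0] >= b[0] and a[1] <= b[1] and a != b

# Random search over tax structures, then keep the non-dominated (Pareto) set;
# a real evolutionary algorithm would evolve this population over generations.
candidates = [(random.uniform(0.1, 0.5), random.uniform(0, 100)) for _ in range(200)]
scored = [(c, evaluate(*c)) for c in candidates]
pareto = [c for c, s in scored if not any(dominates(s2, s) for _, s2 in scored)]
```

Each element of `pareto` is a (rate, threshold) pair no other candidate beats on both objectives at once; a policy-maker would then pick from this frontier, possibly after filtering on revenue and on the share of losing taxpayers.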
  2. By: Masafumi Nakano (Graduate School of Economics, The University of Tokyo); Akihiko Takahashi (Faculty of Economics, The University of Tokyo); Soichiro Takahashi (Graduate School of Economics, The University of Tokyo)
    Abstract: This paper proposes a new state space approach to adaptive fuzzy modeling in a dynamic environment, where Bayesian filtering sequentially learns the model parameters, including the model structures themselves, as state variables. In particular, our approach specifies the state transitions as mean-reversion processes, which is intended to incorporate and extend established state-of-the-art learning techniques as follows: First, the mean-reversion levels of the model parameters are determined by applying an existing learning method to a training period. Next, filtering over the test data enables on-line estimation of the parameters, where the estimates are adaptively tuned at each new data arrival based on the reliable learning result obtained earlier. In this work, we concretely design a Takagi-Sugeno-Kang fuzzy model for financial investment, whose parameters follow autoregressive processes with mean-reversion levels decided by particle swarm optimization. Since Monte Carlo simulation-based algorithms known as particle filters exist, our methodology is applicable to quite general settings including non-linearity, which actually arises in our investment problem. An out-of-sample numerical experiment with security price data successfully demonstrates its effectiveness.
    Date: 2017–10
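    The filtering idea can be illustrated with a minimal bootstrap particle filter tracking a single mean-reverting parameter observed with noise. The mean-reversion level here plays the role of the value learned on a training period; all model constants and the data are invented for illustration, and this is far simpler than the paper's fuzzy-model setting.

```python
import math
import random

random.seed(1)

# Hypothetical mean-reversion level (e.g. obtained from a training period),
# persistence, state noise and observation noise.
MU, PHI, Q, R = 0.5, 0.9, 0.05, 0.1

# Simulate a latent parameter x_t = MU + PHI*(x_{t-1} - MU) + noise,
# observed with measurement noise.
T = 100
x, xs, ys = 0.0, [], []
for _ in range(T):
    x = MU + PHI * (x - MU) + random.gauss(0, math.sqrt(Q))
    xs.append(x)
    ys.append(x + random.gauss(0, math.sqrt(R)))

def particle_filter(ys, n=500):
    parts = [random.gauss(MU, 1.0) for _ in range(n)]
    estimates = []
    for y in ys:
        # Propagate each particle through the mean-reverting transition.
        parts = [MU + PHI * (p - MU) + random.gauss(0, math.sqrt(Q)) for p in parts]
        # Weight by the Gaussian observation likelihood, then normalise.
        w = [math.exp(-(y - p) ** 2 / (2 * R)) for p in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]
        estimates.append(sum(wi * p for wi, p in zip(w, parts)))
        # Multinomial resampling keeps the particle cloud on track.
        parts = random.choices(parts, weights=w, k=n)
    return estimates

est = particle_filter(ys)
rmse = math.sqrt(sum((e, t) == (e, t) and (e - t) ** 2 for e, t in zip(est, xs)) / T)
```

The filtered estimate tracks the latent parameter more tightly than the raw noisy observations would, which is the property the paper exploits for on-line tuning of fuzzy-model parameters.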
  3. By: Nicolas Carnot; Magdalena Kizior; Gilles Mourre
    Abstract: This paper simulates a euro area stabilisation instrument that addresses some concerns often levelled against such ideas. The simulation uses a 'double condition' over observed unemployment rates for triggering the payments to, as well as the contributions from, participating Member States. The functioning is symmetric between good and bad times and includes a form of experience rating as a further safeguard. The behaviour of the fund is assessed with simulations over the past three decades and with 'real time' simulations dating from the euro’s inception as a crucial robustness check. The simulations show that a significant and timely degree of stabilisation can be achieved, complementing national stabilisers without introducing permanent transfers or increasing overall debt. The paper also explores variants of the basic scheme including the introduction of a threshold for restricting the activity of the fund to large shocks.
    Keywords: Macroeconomic stabilisation; risk-sharing; income smoothing; fiscal stabilisers; transfer scheme
    JEL: E61 E62 F36 F42 H77
    Date: 2017–10–16
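    The flavour of a double-condition trigger can be sketched with invented unemployment paths: in this toy rule a country receives support only when its unemployment exceeds both its own historical average and the area-wide average that period, and contributes when it is below its own average. The exact condition, the payout coefficient, and the data are all hypothetical, and a real scheme would also enforce balance and experience rating as the paper describes.

```python
# Hypothetical unemployment paths for three member states (percent).
u = {
    "A": [7, 8, 10, 12, 9, 7],
    "B": [5, 5, 6, 5, 5, 5],
    "C": [9, 9, 8, 7, 7, 8],
}
T = len(next(iter(u.values())))

def fund_flows(u, k=0.5):
    """Per-period net transfer to each country (positive = payment from the fund).

    Toy double condition: support is triggered only when unemployment exceeds
    BOTH the country's own historical average and the area-wide average that
    period; countries below their own average contribute proportionally.
    """
    own_avg = {c: sum(s) / len(s) for c, s in u.items()}
    flows = {c: [] for c in u}
    for t in range(T):
        area = sum(u[c][t] for c in u) / len(u)
        for c in u:
            gap = u[c][t] - own_avg[c]
            if gap > 0 and u[c][t] > area:
                flows[c].append(k * gap)    # receives from the fund
            elif gap < 0:
                flows[c].append(k * gap)    # contributes (negative flow)
            else:
                flows[c].append(0.0)
    return flows

flows = fund_flows(u)
```

Country A, whose unemployment spikes to 12 percent, draws on the fund at the peak, while country B, persistently at its average or below, mostly contributes; averaging over its own history is what limits permanent transfers.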
  4. By: H. Xavier Jara; Marcelo Varela
    Abstract: The aim of this paper is to explore the redistributive effects of taxes and benefits in Ecuador using two different approaches: direct use of reported taxes and benefits in household survey data, and use of simulated taxes and benefits obtained from ECUAMOD, the tax-benefit microsimulation model for Ecuador. Our results show that simulated taxes and social insurance contributions better capture the number of taxpayers and aggregate revenue amounts in official statistics than information taken directly from the data. Moreover, using reported data on taxes and social insurance contributions underestimates their redistributive effect in comparison with simulated policies. We discuss factors behind the differences between the two approaches and conclude with a discussion of the advantages offered by microsimulation for policy analysis.
    Date: 2017
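    The comparison at the heart of this paper can be illustrated with a toy calculation: measure redistribution as the fall in the Gini coefficient from gross to net income, once using (typically under-reported) survey taxes and once using taxes a model would simulate from the statutory schedule. The households and tax amounts below are invented; the point is only the mechanics of the comparison.

```python
def gini(xs):
    """Gini coefficient of a list of non-negative incomes."""
    xs = sorted(xs)
    n = len(xs)
    return sum((2 * i - n + 1) * x for i, x in enumerate(xs)) / (n * sum(xs))

# Hypothetical gross incomes and taxes for five households.
gross = [200, 400, 600, 1000, 2000]
reported_tax = [0, 10, 30, 80, 200]      # survey-reported (often under-reported)
simulated_tax = [0, 20, 60, 160, 420]    # statutory schedule applied by a model

def redistribution(gross, tax):
    """Fall in the Gini coefficient from gross to net income."""
    net = [g - t for g, t in zip(gross, tax)]
    return gini(gross) - gini(net)

rd_reported = redistribution(gross, reported_tax)
rd_simulated = redistribution(gross, simulated_tax)
```

Because the simulated schedule is more progressive than what households report, the measured redistributive effect is larger under simulation, which is the pattern the paper documents for Ecuador.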
  5. By: Kasun Bandara; Christoph Bergmeir; Slawek Smyl
    Abstract: With the advent of Big Data, in many applications databases containing large quantities of similar time series are now available. Forecasting time series in these domains with traditional univariate forecasting procedures leaves great potential for producing accurate forecasts untapped. Recurrent neural networks, and in particular Long Short-Term Memory (LSTM) networks, have recently proven able to outperform state-of-the-art univariate time series forecasting methods in this context, when trained across all available time series. However, if the time series database is heterogeneous, accuracy may degrade, so that on the way towards fully automatic forecasting methods in this space a notion of similarity between the time series needs to be built into the methods. To this end, we present a prediction model using LSTMs on subgroups of similar time series, which are identified by time series clustering techniques. The proposed methodology is able to consistently outperform the baseline LSTM model, and it achieves competitive results on benchmarking datasets, in particular outperforming all other methods on the CIF2016 dataset.
    Date: 2017–10
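    The cluster-then-pool idea can be sketched without a neural network: group series by a simple feature, then fit one pooled model per group and forecast each member with it. The naive drift model below stands in for the per-group LSTM, and the feature, threshold, and series are invented; the point is that pooling within homogeneous groups avoids the degradation that comes from mixing heterogeneous series in one model.

```python
# Hypothetical database of short series: some trending, some mean-reverting.
series = {
    "s1": [1, 2, 3, 4, 5, 6],
    "s2": [2, 4, 6, 8, 10, 12],
    "s3": [5, 4, 5, 4, 5, 4],
    "s4": [3, 2, 3, 2, 3, 2],
}

def feature(s):
    # One-dimensional feature for grouping: average first difference.
    diffs = [b - a for a, b in zip(s, s[1:])]
    return sum(diffs) / len(diffs)

# Crude clustering: trending series (positive average diff) vs. the rest.
groups = {"trend": [], "flat": []}
for name, s in series.items():
    groups["trend" if feature(s) > 0.5 else "flat"].append(name)

def pooled_drift(names):
    # Pooled one-step model per group: next value = last value + group drift.
    diffs = [b - a for n in names
             for a, b in zip(series[n], series[n][1:])]
    return sum(diffs) / len(diffs)

forecasts = {}
for g, names in groups.items():
    drift = pooled_drift(names)
    for n in names:
        forecasts[n] = series[n][-1] + drift
```

In the paper the grouping comes from time series clustering techniques and the pooled model is an LSTM trained across the group's series, but the division of labour is the same.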
  6. By: Giorgio Fagiolo; Mattia Guerini; Francesco Lamperti; Alessio Moneta; Andrea Roventini
    Abstract: Since the influential survey by Windrum et al. (2007), research on empirical validation of agent-based models in economics has made substantial advances, thanks to a constant flow of high-quality contributions. This Chapter attempts to take stock of such recent literature to offer an updated critical review of existing validation techniques. We sketch a simple theoretical framework that conceptualizes existing validation approaches, which we discuss along three different dimensions: (i) comparison between artificial and real-world data; (ii) calibration and estimation of model parameters; and (iii) parameter space exploration.
    Keywords: agent based models, validation, calibration, sensitivity analysis, parameter space exploration
    Date: 2017–09–20
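    The first validation dimension the chapter discusses, comparison between artificial and real-world data, is often operationalised by comparing summary statistics of simulated output against empirical ones. A minimal sketch, with an AR(1) process standing in for model output and a quadratic distance over a hypothetical moment vector (mean, variance, lag-1 autocorrelation):

```python
import random

random.seed(2)

def simulate(phi, sigma, T=500):
    """Toy stand-in for model output: an AR(1) series (no agents, purely illustrative)."""
    x, out = 0.0, []
    for _ in range(T):
        x = phi * x + random.gauss(0, sigma)
        out.append(x)
    return out

def moments(s):
    """Mean, variance and lag-1 autocorrelation of a series."""
    n = len(s)
    m = sum(s) / n
    v = sum((xi - m) ** 2 for xi in s) / n
    ac = sum((s[i] - m) * (s[i + 1] - m) for i in range(n - 1)) / (n * v)
    return m, v, ac

def distance(sim, real):
    # Quadratic distance between simulated and empirical moment vectors.
    return sum((a - b) ** 2 for a, b in zip(moments(sim), moments(real)))

real = simulate(0.8, 1.0)                  # pretend this is the real-world series
good = distance(simulate(0.8, 1.0), real)  # well-specified "model"
bad = distance(simulate(0.1, 1.0), real)   # mis-specified "model"
```

The well-specified model's output sits much closer to the real data in moment space; minimising such a distance over parameters is the basic move behind the calibration and estimation approaches (dimension ii) the chapter reviews.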
  7. By: Justin Sirignano; Konstantinos Spiliopoulos
    Abstract: Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem for strongly convex objective functions and, under slightly stronger conditions, for non-convex objective functions as well. An L$^p$ convergence rate is also proven for the algorithm in the strongly convex case.
    Date: 2017–10
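    An Euler discretisation of the SGDCT dynamics on a toy streaming linear regression shows the algorithm in action: the parameter follows a noisy descent direction along the data stream, with updates scaled by the time step. The model, noise level, and learning-rate constants are illustrative choices, not the paper's.

```python
import random

random.seed(3)

theta_true = 2.0      # parameter to learn (hypothetical model y = theta*x + noise)
theta = 0.0
dt = 0.01             # Euler time step for the continuous-time dynamics

# Euler discretisation of the SGDCT update
#   d(theta_t) = alpha_t * (y_t - theta_t * x_t) * x_t dt,
# with a decaying learning rate alpha_t = C / (C0 + t).
for k in range(100_000):
    t = k * dt
    alpha = 10.0 / (10.0 + t)
    x = random.gauss(0, 1)                       # streaming regressor
    y = theta_true * x + random.gauss(0, 0.1)    # noisy observation
    theta += alpha * (y - theta * x) * x * dt
```

For this strongly convex objective the iterate converges to the true parameter; the paper's central limit theorem characterises the distribution of the rescaled remaining error.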
  8. By: Justin Caron; Thibault Fally; James R. Markusen
    Abstract: Almost all of the literature about the growth of income inequality and the relationship between skilled and unskilled wages approaches the issue from the production side of general equilibrium (skill-biased technical change, international trade). Here, we add a role for income-dependent demand interacted with factor intensities in production. We explore how income growth and trade liberalization influence the demand for skilled labor when preferences are non-homothetic and income-elastic goods are more intensive in skilled labor, an empirical regularity documented in Caron, Fally and Markusen (2014). In one experiment, counterfactual simulations show that sector-neutral productivity growth, which generates shifts in consumption towards skill-intensive goods, leads to significant increases in the skill premium: in developing countries, a one percent increase in productivity leads to a 0.1 to 0.25 percent increase in the skill premium. In several countries, including China and India, simulations suggest that the historical growth experienced in the last 25 years may have led to an increase in the skill premium of more than 10%. In a second experiment, we show that trade cost reductions generate quantitatively very different outcomes once we account for non-homothetic preferences. These imply substantially less predicted net factor content of trade and allow for a shift in consumption patterns caused by trade-induced income growth. Overall, the negative effect of trade cost reductions on the skill premium predicted for developing countries under homothetic preferences (Stolper-Samuelson) is strongly mitigated, and sometimes reversed.
    Keywords: non-homothetic preferences, skill premium, per capita income, international trade
    JEL: F10 O10 F16 J31
    Date: 2017
  9. By: Masaaki Fujii; Akihiko Takahashi; Masayuki Takahashi
    Abstract: We demonstrate that the use of asymptotic expansion as prior knowledge in the "deep BSDE solver", which is a deep learning method for high dimensional BSDEs proposed by Weinan E, Han & Jentzen (2017), drastically reduces the loss function and accelerates the speed of convergence. We illustrate the technique and its implications using Bergman's model with different lending and borrowing rates and a class of quadratic-growth BSDEs.
    Date: 2017–10
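    The benefit of an asymptotic expansion as prior knowledge can be illustrated outside the BSDE setting: warm-starting an iterative solver with a low-order expansion of the solution reduces the iterations needed, in the same spirit as the expansion-based prior reducing the loss and accelerating convergence of the deep BSDE solver. The toy below applies Newton's method to a scalar equation with a small parameter; it is not the authors' scheme.

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton iteration; returns (root, number of iterations used)."""
    x = x0
    for i in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, i + 1
    return x, max_iter

# Toy problem with a small parameter eps: solve x + eps*x^3 = 1.
eps = 0.2
f = lambda x: x + eps * x ** 3 - 1.0
df = lambda x: 1.0 + 3.0 * eps * x ** 2

# Naive initial guess vs. the first-order asymptotic expansion x ~ 1 - eps.
x_naive, n_naive = newton(f, df, 0.0)
x_exp, n_exp = newton(f, df, 1.0 - eps)
```

Both starts reach the same root, but the expansion-based start lands close enough to convert immediately into Newton's quadratic convergence regime, saving iterations; in the deep BSDE solver the analogous saving shows up as a drastically lower initial loss and faster training.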

General information on the NEP project can be found at . For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.