nep-cmp New Economics Papers
on Computational Economics
Issue of 2009‒06‒17
seven papers chosen by
Stan Miles
Thompson Rivers University

  2. Understanding Italian inequality trends: a simulation-based decomposition By Carlo Vittorio FIORIO
  3. The Distributional Effects of Tax-benefit Policies under New Labour: A Shapley Decomposition By Olivier Bargain
  4. Choosing the extent of private participation in public services: A computable general equilibrium perspective By Chisari, Omar O.; Lambardi, Germán D.; Romero, Carlos A.
  5. Solving stochastic multi-objective programming through the GP model By Belaid AOUNI; Cinzia COLAPINTO; Davide LA TORRE
  6. Predictability of Equity Models By Valls Pereira, Pedro L.; Chicaroli, Rodrigo
  7. Computing DSGE Models with Recursive Preferences By Dario Caldara; Jesus Fernandez-Villaverde; Juan F. Rubio-Ramirez; Wen Yao

  1. By: Roberto Patuelli (University of Lugano, Switzerland and The Rimini Centre for Economic Analysis, Italy); Aura Reggiani (University of Bologna, Italy); Peter Nijkamp (VU University Amsterdam, The Netherlands); Norbert Schanne (Institute for Employment Research (IAB), Nuremberg, Germany)
    Abstract: In this paper, we present a review of various computational experiments – and their results – concerning Neural Network (NN) models developed for regional employment forecasting. NNs are widely used in several fields because of their flexible specification structure. Their use in studying and predicting economic variables, such as employment or migration, is justified by the ability of NNs to learn from data – in other words, to find functional relationships among the economic variables under analysis. A series of NN experiments is presented in the paper. Using two data sets on German NUTS-3 districts (326 and 113 labour market districts in the former West and East Germany, respectively), the results emerging from the implementation of various NN models – aimed at forecasting variations in full-time employment – are provided and discussed. In our approach, single forecasts are computed by the models for each district. Different specifications of the NN models are first tested in terms of: (a) explanatory variables; and (b) NN structures. The average statistical results of simulated out-of-sample forecasts over different periods are summarized and commented on. In addition to variable and structure specification, the choice of NN learning parameters and internal functions is also critical to the success of NNs. Comprehensive testing of these parameters is, however, limited in the literature. A sensitivity analysis is therefore carried out and discussed, in order to evaluate different combinations of NN parameters. The paper concludes with methodological and empirical remarks, as well as suggestions for future research.
    Date: 2009–01
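As a purely illustrative aside (not the authors' code; the data, network size, and learning rate below are all invented), a single-hidden-layer NN of the kind used for such regression forecasts can be fitted by plain gradient descent in a few lines of Python:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data standing in for district-level panels:
# two explanatory variables and a near-linear employment variation.
n = 200
X = rng.normal(size=(n, 2))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=n)

def init(n_in, n_hidden, rng):
    """Random initial weights for one hidden tanh layer."""
    return {
        "W1": rng.normal(scale=0.5, size=(n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(scale=0.5, size=n_hidden),
        "b2": 0.0,
    }

def forward(p, X):
    H = np.tanh(X @ p["W1"] + p["b1"])     # hidden activations
    return H, H @ p["W2"] + p["b2"]        # linear output layer

def train(p, X, y, lr=0.05, epochs=500):
    """Full-batch gradient descent on squared error."""
    n = len(y)
    for _ in range(epochs):
        H, yhat = forward(p, X)
        err = yhat - y
        gW2 = H.T @ err / n
        gb2 = err.mean()
        dH = np.outer(err, p["W2"]) * (1 - H ** 2)   # backprop through tanh
        gW1 = X.T @ dH / n
        gb1 = dH.mean(axis=0)
        p["W2"] -= lr * gW2; p["b2"] -= lr * gb2
        p["W1"] -= lr * gW1; p["b1"] -= lr * gb1
    return p

p = init(2, 8, rng)
_, yhat0 = forward(p, X)
mse0 = np.mean((yhat0 - y) ** 2)   # error before training
p = train(p, X, y)
_, yhat1 = forward(p, X)
mse1 = np.mean((yhat1 - y) ** 2)   # error after training
```

The "flexible specification structure" the abstract mentions corresponds to the free choices here: the number of hidden units, the activation function, and the learning parameters whose sensitivity the paper analyses.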
  2. By: Carlo Vittorio FIORIO
    Abstract: Using simulation-based inequality decomposition methods, this paper analyses the peculiar trend of household equivalent income inequality in Italy between 1977 and 2004, providing a unifying framework for analysing changes in income factor distributions at the individual level, income aggregation within the household, and changes in the socio-demographic characteristics of the population. Changes in the distributions of employment and self-employment income explain most of the downward trend between 1977 and 1991 and of the upward trend between 1991 and 2004. Among the socio-demographic changes considered, the increased probability of households with an earning spouse is found to have the most relevant effect, which consistently acted to increase household equivalent income inequality. Changes in the distribution of pension income had an equalising effect in all periods considered.
    Keywords: Inequality trends, simulation, counterfactual analysis.
    JEL: D31 D63 C51
    Date: 2008–07–12
  3. By: Olivier Bargain (University College Dublin)
    Abstract: Using counterfactual microsimulations, Shapley decompositions of the time change in inequality and poverty indices make it possible to disentangle and quantify the relative effect of tax-benefit policy changes, compared to all other effects, including shifts in the distribution of market income. This approach also helps to clarify the different issues underlying the distributional evaluation of policy reforms. An application to the UK (1998-2001) confirms previous findings that inequality and the depth of poverty would have increased under the first New Labour government, had important reforms such as the extensions of income support and tax credits not been implemented. These reforms also contributed substantially to reducing poverty among families with children and pensioners.
    Keywords: Tax-benefit policy; inequality; poverty; Shapley decomposition; microsimulation
    JEL: H23 H53 I32
    Date: 2009–06–10
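To illustrate the mechanics only (the index values below are made up, not the paper's estimates), a two-factor Shapley decomposition averages each factor's marginal effect over the two orders in which the factors can be switched from their start-year to their end-year state:

```python
# Hypothetical inequality index as a function of two "switches":
# factor A = tax-benefit policy regime, factor B = market-income
# distribution; 0 = start year, 1 = end year. Values invented.
I = {(0, 0): 0.300, (1, 0): 0.280, (0, 1): 0.335, (1, 1): 0.310}

def shapley_two_factor(I):
    """Shapley contributions of factors A and B to I[1,1] - I[0,0]:
    each factor's marginal effect, averaged over both switch orders."""
    contrib_A = 0.5 * ((I[1, 0] - I[0, 0]) + (I[1, 1] - I[0, 1]))
    contrib_B = 0.5 * ((I[0, 1] - I[0, 0]) + (I[1, 1] - I[1, 0]))
    return contrib_A, contrib_B

cA, cB = shapley_two_factor(I)
total = I[1, 1] - I[0, 0]   # the contributions sum exactly to the total
```

The attraction of the Shapley approach is visible here: the decomposition is exact (cA + cB equals the total change) and does not depend on an arbitrary ordering of the counterfactuals.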
  4. By: Chisari, Omar O.; Lambardi, Germán D.; Romero, Carlos A.
    Abstract: What determines the propensity to reduce or widen the extent of public ownership? Why was there a tendency to privatise and concede public utilities during the nineties? The answers to these questions depend on both macroeconomic and microeconomic considerations, and correct answers could also help to avoid or prevent inefficient reversals and frustrations that jeopardize reform processes. An alternative perspective, combining micro and macro arguments, is given by general equilibrium models. The objective of this paper is to explore the rationale behind the choice of the implicit “technologies” of private and public operators of utilities in an economy whose fiscal budget and trade balance are in equilibrium. The simulations confirm that the choice of the technology to be used for servicing infrastructure depends on deep efficiency and cost parameters. The model shows that there are plausible scenarios in which the selection is not unique.
    Keywords: Computable General Equilibrium; Trade balance; public services
    JEL: C68 F32 L97
    Date: 2007–09
  5. By: Belaid AOUNI; Cinzia COLAPINTO; Davide LA TORRE
    Abstract: The aim of this paper is to present an approach for solving Stochastic Multi-Objective Programming (SMOP) problems through the Goal Programming (GP) model. We introduce a deterministic equivalent formulation and show how GP can provide solutions to SMOP. The proposed method is illustrated through a numerical example from the Tunisian stock exchange market.
    Keywords: Stochastic Multi-Objective Programming, Goal Programming
    Date: 2008–06–13
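A minimal sketch of the idea (all returns, risk figures, goals, and weights below are invented for illustration, and a tiny grid search stands in for a proper LP solver): once the stochastic coefficients are replaced by their expectations, the GP model minimizes weighted deviations from the aspiration levels:

```python
import numpy as np

# Hypothetical two-asset portfolio: stochastic returns are replaced
# by their expected values (the deterministic equivalent), then GP
# penalizes the unwanted deviations from two aspiration levels.
mu = np.array([0.08, 0.14])     # expected returns E[r_i]
risk = np.array([0.02, 0.09])   # risk proxy per asset
goal_return, goal_risk = 0.10, 0.05
w_return, w_risk = 1.0, 1.0     # penalty weights on deviations

def gp_objective(x1):
    """Weighted sum of the return shortfall and the risk excess for a
    portfolio (x1, 1 - x1) satisfying the budget constraint."""
    x = np.array([x1, 1.0 - x1])
    ret, rk = mu @ x, risk @ x
    d_ret_minus = max(goal_return - ret, 0.0)   # shortfall below return goal
    d_risk_plus = max(rk - goal_risk, 0.0)      # excess above risk goal
    return w_return * d_ret_minus + w_risk * d_risk_plus

grid = np.linspace(0.0, 1.0, 1001)
vals = [gp_objective(x) for x in grid]
best_x1 = grid[int(np.argmin(vals))]
best_val = min(vals)   # 0 means both goals are attainable simultaneously
```

In a real application the minimization would be written as a linear program over the deviation variables rather than solved by grid search; the sketch only shows how the stochastic objectives collapse into a single deterministic GP criterion.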
  6. By: Valls Pereira, Pedro L.; Chicaroli, Rodrigo
    Abstract: In this study, we verify the existence of predictability in the Brazilian equity market. Unlike other studies in this vein, which evaluate the original series of each stock, we evaluate synthetic series created on the basis of linear models of stocks. Following Burgess (1999), we use stepwise regression to form a model for each stock. We then use the variance ratio profile together with a Monte Carlo simulation to select models with potential predictability. Unlike Burgess (1999), we carry out White's (2000) Reality Check in order to verify the existence of positive returns in the out-of-sample period. We use the strategies proposed by Sullivan, Timmermann & White (1999) and Hsu & Kuan (2005), amounting to 26,410 simulated strategies. Finally, using the bootstrap methodology with 1,000 simulations, we find strong evidence of predictability in the models, even after accounting for transaction costs.
    Keywords: predictability; variance ratio profile; Monte Carlo simulation; reality check; bootstrap; technical analysis
    JEL: C15 G10
    Date: 2009–01
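For reference, the variance ratio statistic underlying the variance ratio profile compares the variance of q-period returns with q times the variance of one-period returns; it is close to 1 for a random walk and drifts away from 1 when returns are serially dependent. A sketch on simulated data (not the Brazilian series used in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def variance_ratio(r, q):
    """VR(q) = Var(q-period returns) / (q * Var(1-period returns));
    approximately 1 under a random walk."""
    rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-period sums
    return rq.var(ddof=1) / (q * r.var(ddof=1))

# i.i.d. returns: VR should be near 1 at any horizon
r_iid = rng.normal(size=20000)
vr_iid = variance_ratio(r_iid, 5)

# AR(1) returns with positive autocorrelation: VR rises above 1
r_ar = np.empty(20000)
r_ar[0] = 0.0
eps = rng.normal(size=20000)
for t in range(1, 20000):
    r_ar[t] = 0.3 * r_ar[t - 1] + eps[t]
vr_ar = variance_ratio(r_ar, 5)
```

The "profile" referred to in the abstract is the curve of VR(q) over a range of horizons q, whose shape is then compared against Monte Carlo bands to flag candidate models.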
  7. By: Dario Caldara (Institute of International Economic Studies, Stockholm University); Jesus Fernandez-Villaverde (Department of Economics, University of Pennsylvania); Juan F. Rubio-Ramirez (Department of Economics, Duke University); Wen Yao (Department of Economics, University of Pennsylvania)
    Abstract: This paper compares different solution methods for computing the equilibrium of dynamic stochastic general equilibrium (DSGE) models with recursive preferences such as those in Epstein and Zin (1989 and 1991). Models with these preferences have recently become popular, but we know little about the best ways to implement them numerically. To fill this gap, we solve the stochastic neoclassical growth model with recursive preferences using four different approaches: second and third-order perturbation, Chebyshev polynomials, and value function iteration. We document the performance of the methods in terms of computing time, implementation complexity, and accuracy. Our main finding is that a third-order perturbation is competitive in terms of accuracy with Chebyshev polynomials and value function iteration, while being an order of magnitude faster to run. Therefore, we conclude that perturbation methods are an attractive approach for computing this class of problems.
    Keywords: DSGE Models, Recursive Preferences, Perturbation
    JEL: C63 C68 E37
    Date: 2009–05–25
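As a toy illustration of one of the benchmarked methods (value function iteration), the sketch below solves the special case with time-additive log utility, full depreciation, and no shocks, where the exact policy k' = αβk^α is known in closed form; the recursive-preference case would replace the Bellman update with the Epstein–Zin aggregator. All parameter values are chosen only for the illustration:

```python
import numpy as np

alpha, beta = 0.36, 0.95
kgrid = np.linspace(0.05, 0.5, 300)

# Return matrix U[i, j] = log(c) for c = k_i^alpha - k_j, with a large
# negative penalty for infeasible (negative-consumption) choices.
C = kgrid[:, None] ** alpha - kgrid[None, :]
U = np.where(C > 0, np.log(np.where(C > 0, C, 1.0)), -1e10)

# Bellman iteration: V(k) = max_{k'} { log(k^alpha - k') + beta V(k') }
V = np.zeros(len(kgrid))
for _ in range(1000):
    V_new = (U + beta * V[None, :]).max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-9:   # contraction has converged
        V = V_new
        break
    V = V_new

policy = kgrid[(U + beta * V[None, :]).argmax(axis=1)]
closed_form = alpha * beta * kgrid ** alpha   # exact policy for this case
max_err = np.max(np.abs(policy - closed_form))
```

Even this toy case hints at the paper's trade-off: the grid-based iteration is simple and robust but slow, which is why a third-order perturbation of comparable accuracy at a fraction of the cost is an attractive finding.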

This nep-cmp issue is ©2009 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.