nep-cmp New Economics Papers
on Computational Economics
Issue of 2018‒04‒02
fourteen papers chosen by
Stan Miles
Thompson Rivers University

  1. Modelling stock correlations with expected returns from investors By Ming-Yuan Yang; Sai-Ping Li; Li-Xin Zhong; Fei Ren
  2. Optimal Dynamic Fiscal Policy with Endogenous Debt Limits By Yongyang Cai; Simon Scheidegger; Sevin Yeltekin; Philipp Renner; Kenneth Judd
  3. The use of hypothetical household data for policy learning – EUROMOD HHoT baseline indicators By Gasior, Katrin; Recchia, Pasquale
  4. Economic Impact of Typhoon Ondoy in Pasig and Marikina Cities Using a Multiweek CGE Model Analysis By Tuano, Philip Arnold P.; Muyrong, Marjorie S.; Clarete, Ramon L.
  5. Countercyclical Prudential Tools in an Estimated DSGE Model By Serafín Frache; Javier García-Cicco; Jorge Ponce
  6. Implementable tensor methods in unconstrained convex optimization By NESTEROV Yurii
  7. Stochastic Approximation Schemes for Economic Capital and Risk Margin Computations By David Barrera; Stéphane Crépey; Babacar Diallo; Gersende Fort; Emmanuel Gobet; Uladzislau Stazhynski
  8. Panic and propagation in 1873: a computational network approach By Daniel Ladley; Peter L. Rousseau
  9. Adverse Selection, Risk Sharing and Business Cycles By Marcelo Veracierto
  10. Efficient Pricing of Barrier Options on High Volatility Assets using Subset Simulation By Keegan Mendonca; Vasileios E. Kontosakos; Athanasios A. Pantelous; Konstantin M. Zuev
  11. Sustainability of the pension system in Macedonia: A comprehensive analysis and reform proposal with MK-PENS - dynamic microsimulation model By Blagica Petreski; Pavle Gacov
  12. Implementing Macroprudential Policy in NiGEM By Oriol Carreras; E Philip Davis; Ian Hurst; Iana Liadze; Rebecca Piggott; James Warren
  13. High Taxes on Cloudy Days: Dynamic State-Induced Price Components in Power Markets By Göke, Leonard; Madlener, Reinhard
  14. Social media bots and stock markets By Rui Fan; Oleksandr Talavera; Vu Tran

  1. By: Ming-Yuan Yang; Sai-Ping Li; Li-Xin Zhong; Fei Ren
    Abstract: Stock correlations are crucial to asset pricing, investor decision-making, and financial risk regulation. However, a microscopic explanation based on agent-based modeling is still lacking. We propose a model derived from the minority game for modeling stock correlations, in which an agent's expected return for one stock is influenced by the historical return of the other stock. Each agent makes decisions based on his expected return, with reference to information dissemination and the historical return of the stock. We find that the returns of the two stocks are positively (negatively) correlated when agents' expected returns for one stock are positively (negatively) correlated with the historical return of the other. We provide both numerical simulations and analytical studies, and we explain stock correlations for cases in which agents have either homogeneous or heterogeneous expected returns. The result still holds when other factors, such as holding decisions and external events, are included, which broadens the applicability of the model.
    Date: 2018–03
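The cross-stock coupling the abstract describes can be illustrated with a toy simulation that is not the authors' minority-game model: each agent's expected return for one stock loads on the other stock's lagged return (coefficient beta) and on the stock's own lagged return (a momentum term gamma), and a simple majority-style decision rule moves prices. All coefficients, noise levels, and the decision rule below are illustrative assumptions.

```python
import numpy as np

def return_correlation(beta, gamma=0.5, n_agents=200, n_steps=4000, seed=0):
    """Two coupled stocks: each agent's expected return for one stock is
    beta * (other stock's lagged return) + gamma * (own lagged return)
    + private noise; the realized return is the average sign of the
    agents' buy/sell decisions plus a small exogenous shock."""
    rng = np.random.default_rng(seed)
    r1 = r2 = 0.0
    hist1, hist2 = [], []
    for _ in range(n_steps):
        e1 = beta * r2 + gamma * r1 + rng.normal(0.0, 1.0, n_agents)
        e2 = beta * r1 + gamma * r2 + rng.normal(0.0, 1.0, n_agents)
        r1 = np.sign(e1).mean() + rng.normal(0.0, 0.1)
        r2 = np.sign(e2).mean() + rng.normal(0.0, 0.1)
        hist1.append(r1)
        hist2.append(r2)
    return np.corrcoef(hist1, hist2)[0, 1]
```

With a positive (negative) cross-stock coefficient, the two simulated return series come out positively (negatively) correlated, mirroring the abstract's main finding.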
  2. By: Yongyang Cai (Ohio State University); Simon Scheidegger (University of Zürich); Sevin Yeltekin (Carnegie Mellon University); Philipp Renner (Lancaster University); Kenneth Judd (Stanford University)
    Abstract: Since the financial crisis of 2008 and the increase in government debt levels worldwide, fiscal austerity has been a focal point in public debates. Central to these debates is the natural debt limit, i.e. the level of public debt that is sustainable in the long run, and the design of fiscal policy consistent with that limit. In much of the earlier work on dynamic fiscal policy, governments are not allowed to lend, and the upper limit on debt is determined in an ad-hoc manner. Aiyagari et al. (2002)'s (AMSS) seminal paper on fiscal policy in incomplete markets relaxed the lending assumption and revisited the earlier work of Barro (1979) and Lucas and Stokey (1983) to study the implications for tax policy. Their results implied that taxes should roughly follow a random walk. They also presented examples where the long-run tax rate is zero and any spending is financed out of the government's asset income (i.e., the government holds debt of the people). However, their approach had some weaknesses. First, it imposed an artificial limit on government debt and therefore did not address the question of a natural debt limit. Second, it assumed, as much of the prior literature did, that government spending is exogenous. We relax the assumptions on debt and spending, and we use computational methods that do not rely only on local optimality. While we focus on the models examined in AMSS, we present a framework that can address fiscal policy issues in a self-consistent manner. In particular, we derive endogenous limits on debt and allow for endogenous government spending. Our approach involves recasting the policy problem as an infinite-horizon dynamic programming problem. The government's value function may not be concave, and it can also exhibit very high curvature, particularly as debt approaches its endogenous limit. In dynamic taxation problems, the government's problem is a mathematical program with complementarity constraints (MPCC).
We explicitly use the MPCC formulation, which is essential for the necessary global optimization analysis of the government's problem. Our MPCC approach uses computational algorithms developed only in the past twenty years, and it allows us to solve the problem reliably and accurately. Using our combination of computational tools and more general economic assumptions, we re-address questions regarding optimal taxation and debt management in a more realistic setup. These tools allow us to determine the debt limits implied by assumptions on the primitives of the economic environment and to assess how the level of debt affects tax policy, general economic performance, and the time-series properties of tax rates and debt levels. Our results show that the more general framework of endogenous government debt limits and spending has substantially different implications than earlier analyses have suggested. First, the behavior of optimal policy over long horizons (e.g., 1,000 years) is much more complex than simpler models imply. In particular, the long-run distribution of debt is multimodal, and the long-run level of debt is history-dependent. If initial debt is low enough and government spending is not hit with large shocks, then the government will accumulate a "war chest" that allows long-run tax rates to be zero. However, if, in the same model, initial debt is high and/or the government is hit with a long series of bad spending shocks, then debt will rise to a high level and will not fall even if no further bad spending shocks arrive. In this second case, governments with large debt levels avoid default by reducing spending and use taxes to finance a persistently high debt. We also examine the case of fixed government spending and find that the results are dramatically affected.
In particular, we illustrate a case where, if spending shocks are of moderate size (smaller than US historical experience), no positive level of debt is feasible. That is, if a government begins with positive debt, then there is a sequence of spending shocks such that no feasible tax and borrowing policy can finance those expenditures. In such cases, exogenous spending assumptions imply that governments must be endowed with their war chests at the beginning and cannot build them up with probability one. These examples clearly illustrate that any analysis of fiscal policy that aims to examine historical fiscal policy must consider making spending flexible. The application of our methodology is not limited to optimal tax problems. Optimal macroeconomic policy problems, as well as social insurance design, typically involve solving high-dimensional dynamic programming problems. Solving such problems is a complicated but very important task, as the policy recommendations depend crucially on the accuracy of the numerical results. In much of the optimal macroeconomic policy and social insurance literature, the accuracy of the numerical solutions is unclear. Additionally, most solution approaches ignore feasibility issues and impose ad-hoc limits on state variables such as government debt. An accurate approach to solving dynamic policy models requires the ability to handle the high-dimensional nature of the problems as well as the unknown feasible state space. The methodology offered in this paper can be used for computing high-dimensional dynamic policy problems with unknown state spaces.
    Date: 2017
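The recasting of the policy problem as an infinite-horizon dynamic programming problem can be sketched in a deliberately tiny setting that is not the AMSS or the authors' model: a deterministic government minimizes the discounted sum of quadratic tax distortions subject to its budget constraint, with an ad-hoc debt cap standing in for the endogenous limit the paper derives. All parameters are illustrative.

```python
import numpy as np

def solve_debt_policy(r=0.03, g=0.2, b_max=2.0, n=201, beta=0.96, tol=1e-8):
    """Toy value function iteration: each period the government chooses
    next-period debt b' on a grid to minimize discounted quadratic tax
    distortions, where the budget constraint implies
    tau = (1 + r) * b + g - b' and debt is capped at an ad-hoc b_max."""
    grid = np.linspace(-b_max, b_max, n)      # negative values = assets
    V = np.zeros(n)
    while True:
        tau = (1 + r) * grid[:, None] + g - grid[None, :]
        cost = 0.5 * tau**2 + beta * V[None, :]
        V_new = cost.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return grid, V_new, grid[cost.argmin(axis=1)]
        V = V_new
```

In this toy, the value function (here a discounted cost) is increasing in the level of debt; the paper's actual problem replaces the ad-hoc cap with an endogenous limit and adds stochastic, endogenous spending and MPCC-based global optimization.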
  3. By: Gasior, Katrin; Recchia, Pasquale
    Abstract: Tax-benefit microsimulation models are typically used to assess the impact of policy changes on the income distribution, based on micro data representative of the population. Such analysis assesses the effects of tax-benefit policies by considering their interaction effects and the population structure, both of which are important for an overall assessment of complex realities. However, it can be helpful to abstract from this complexity and to explain the effects of tax-benefit policies using concrete examples. Using hypothetical households visualises how individual policies are linked with each other while leaving aside the additional complexity of the population structure. This paper uses the Hypothetical Household Tool (HHoT) to generate hypothetical household data that can be used in EUROMOD, the tax and benefit microsimulation model of the European Union, to analyse current tax and benefit policies as well as the effects of policy changes in a comparative manner. The paper provides a brief introduction to the use of hypothetical data in general and presents concrete examples of its application. The main part proposes a set of basic indicators that can be used to learn about European tax-benefit systems in a comparative perspective.
    Date: 2018–03–19
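Outside EUROMOD, the logic of hypothetical household data can be demonstrated with a stylized tax-benefit function evaluated at a few hypothetical gross earnings levels; every rate, threshold and benefit amount below is invented for illustration and corresponds to no real system.

```python
def net_income(gross, n_children=0):
    """Stylized tax-benefit rules (purely illustrative): a 10% social
    insurance contribution, a 10,000 basic allowance, a two-bracket
    income tax (20% / 40%), and a child benefit withdrawn above 50,000."""
    sic = 0.10 * gross
    taxable = max(gross - sic - 10_000, 0.0)
    tax = 0.20 * min(taxable, 30_000) + 0.40 * max(taxable - 30_000, 0.0)
    child_benefit = 1_200 * n_children if gross < 50_000 else 0.0
    return gross - sic - tax + child_benefit

# indicator-style table for a hypothetical one-earner couple, two children
for gross in (20_000, 40_000, 60_000):
    net = net_income(gross, n_children=2)
    print(gross, round(net, 2), round(1 - net / gross, 3))
```

Sweeping such a function over a grid of hypothetical households yields exactly the kind of comparative baseline indicators (net incomes, effective tax rates, benefit withdrawal points) the paper proposes.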
  4. By: Tuano, Philip Arnold P.; Muyrong, Marjorie S.; Clarete, Ramon L.
    Abstract: The adverse effects of the extreme flooding caused by Typhoon Ondoy in Pasig and Marikina Cities in 2009 were significant. This paper estimates that the two cities may have lost PHP 22.54 billion, 90 percent of which represents the loss of Pasig City. The study's estimates are obtained using a multiweek, local-economy computable general equilibrium analysis that assumes weekly market clearing. Some suggestions for improving the methodology are provided.
    Keywords: Philippines, CGE, Typhoon Ondoy, Pasig City, Marikina City, computable general equilibrium model, equivalent variation, extreme flooding, climate-related disaster
    Date: 2018
  5. By: Serafín Frache (Banco Central del Uruguay y Departamento de Economía, Facultad de Ciencias Sociales, Universidad de la República); Javier García-Cicco (Banco Central de Chile y Universidad Católica Argentina); Jorge Ponce (Banco Central del Uruguay y Departamento de Economía, Facultad de Ciencias Sociales, Universidad de la República)
    Abstract: We develop a DSGE model for a small, open economy with a banking sector and endogenous default. The model is used to perform a realistic assessment of two macroprudential tools: countercyclical capital buffers (CCB) and dynamic provisions (DP). The model is estimated with data for Uruguay, where dynamic provisioning has been in place since the early 2000s. In general, while both tools force banks to build buffers, we find that DP seems to outperform the CCB in terms of smoothing the cycle. We also find that the source of the shock affecting the financial system matters for the relative performance of the two tools. In particular, after a positive external shock the ratio of credit to GDP decreases, which discourages its use as an indicator variable for activating countercyclical regulation.
    Keywords: banking regulation, minimum capital requirement, countercyclical capital buffer, reserve requirement, (countercyclical or dynamic) loan loss provision, endogenous default, Basel III, DSGE, Uruguay
    JEL: G21 G28
    Date: 2017–08
  6. By: NESTEROV Yurii (CORE, Université catholique de Louvain)
    Abstract: In this paper we develop new tensor methods for unconstrained convex optimization, which solve at each iteration an auxiliary problem of minimizing a convex multivariate polynomial. We analyze the simplest scheme, based on minimization of a regularized local model of the objective function, and its accelerated version, obtained in the framework of estimating sequences. Their rates of convergence are compared with the worst-case lower complexity bounds for the corresponding problem classes. Finally, for third-order methods, we suggest an efficient technique for solving the auxiliary problem, based on the recently developed relative smoothness condition [4, 14]. With this elaboration, the third-order methods become implementable and very fast. The rate of convergence in terms of the function value for the accelerated third-order scheme reaches the level O(1/k^4), where k is the number of iterations. This is very close to the lower bound of the order O(1/k^5), which is also justified in this paper. At the same time, in many important cases the computational cost of one iteration of this method remains at the level typical of second-order methods.
    Keywords: high-order methods, tensor methods, convex optimization, worst-case complexity bounds, lower complexity bounds
    Date: 2018–03–12
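The paper's third-order scheme is not reproduced here, but the structure of these methods — minimize a regularized local model at every iteration — can be sketched for the simplest (second-order, cubically regularized) member of the family in one dimension, where the auxiliary model has an explicit minimizer. The test function and the regularization constant M below are illustrative.

```python
import math

def cubic_step(g, H, M):
    """Closed-form minimizer of the 1-D regularized model
    m(h) = g*h + 0.5*H*h**2 + (M/6)*|h|**3, with H >= 0 and M > 0."""
    if g < 0.0:   # minimizer is positive: solve g + H*h + (M/2)*h**2 = 0
        return (-H + math.sqrt(H * H - 2.0 * M * g)) / M
    if g > 0.0:   # minimizer is negative: solve g + H*h - (M/2)*h**2 = 0
        return (H - math.sqrt(H * H + 2.0 * M * g)) / M
    return 0.0

def cubic_newton(x0, df, d2f, M, iters=500):
    """Iterate the regularized-model step on a univariate convex f."""
    x = x0
    for _ in range(iters):
        x += cubic_step(df(x), d2f(x), M)
    return x

# f(x) = x**4 has a degenerate minimum at 0; f'' = 12 x**2 is
# 24-Lipschitz on [-1, 1], so M = 50 is a valid regularization constant
x_star = cubic_newton(1.0, lambda x: 4.0 * x**3, lambda x: 12.0 * x**2, M=50.0)
```

The third-order methods of the paper replace this quadratic-plus-cubic model with a third-order Taylor model plus a fourth-power regularizer, whose tractable minimization is the technical contribution highlighted above.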
  7. By: David Barrera (CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique - Polytechnique - X - CNRS - Centre National de la Recherche Scientifique); Stéphane Crépey (LaMME - Laboratoire de Mathématiques et Modélisation d'Evry - INRA - Institut National de la Recherche Agronomique - UEVE - Université d'Évry-Val-d'Essonne - ENSIIE - CNRS - Centre National de la Recherche Scientifique); Babacar Diallo (LaMME - Laboratoire de Mathématiques et Modélisation d'Evry - INRA - Institut National de la Recherche Agronomique - UEVE - Université d'Évry-Val-d'Essonne - ENSIIE - CNRS - Centre National de la Recherche Scientifique); Gersende Fort (IMT - Institut de Mathématiques de Toulouse UMR5219 - CNRS - Centre National de la Recherche Scientifique - INSA Toulouse - Institut National des Sciences Appliquées - Toulouse - PRES Université de Toulouse - UPS - Université Paul Sabatier - Toulouse 3 - UT2 - Université Toulouse 2 - UT1 - Université Toulouse 1 Capitole); Emmanuel Gobet (CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique - Polytechnique - X - CNRS - Centre National de la Recherche Scientifique); Uladzislau Stazhynski (CMAP - Centre de Mathématiques Appliquées - Ecole Polytechnique - Polytechnique - X - CNRS - Centre National de la Recherche Scientifique)
    Abstract: We consider the numerical computation by an insurance company or a bank of its economic capital, in the form of a value-at-risk or expected shortfall of its loss over a given time horizon. This loss includes the appreciation of the mark-to-model of the liabilities of the firm, which we account for by nested Monte Carlo à la Gordy and Juneja (2010) or by regression à la Broadie, Du, and Moallemi (2015). Using a stochastic approximation point of view on value-at-risk and expected shortfall, we establish the convergence of the resulting economic capital simulation schemes under mild assumptions that bear only on the theoretical limiting problem at hand, as opposed to assumptions on the approximating problems in Gordy-Juneja (2010) and Broadie-Du-Moallemi (2015). Our economic capital estimates can then be made conditional in a Markov framework and integrated in an outer Monte Carlo simulation to yield the risk margin of the firm, corresponding to a market value margin (MVM) in insurance or to a capital valuation adjustment (KVA) in banking parlance. This is illustrated numerically by a KVA case study implemented on GPUs.
    Date: 2018–02–15
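A minimal sketch of the stochastic-approximation viewpoint, assuming i.i.d. loss draws rather than the paper's nested or regression-based setting: a Robbins-Monro recursion tracks the value-at-risk, and averaging a companion term estimates the expected shortfall. Step sizes and burn-in below are ad-hoc illustrative choices.

```python
import numpy as np

def var_es_sa(sample_loss, alpha=0.975, n=200_000, seed=0):
    """Robbins-Monro recursion for the alpha-quantile (value-at-risk) of
    a simulatable loss, with a companion averaging estimator for the
    expected shortfall."""
    rng = np.random.default_rng(seed)
    theta, es_sum, es_count = 0.0, 0.0, 0
    for k in range(1, n + 1):
        loss = sample_loss(rng)
        # theta drifts up while losses exceed it more than (1-alpha) often
        theta -= (2.0 / k**0.7) * ((loss <= theta) - alpha)
        if k > n // 2:  # average only after a burn-in phase
            es_sum += theta + max(loss - theta, 0.0) / (1 - alpha)
            es_count += 1
    return theta, es_sum / es_count

var975, es975 = var_es_sa(lambda rng: rng.normal())  # standard normal losses
```

For standard normal losses at the 97.5% level the recursion should settle near the known values VaR ≈ 1.96 and ES ≈ 2.34.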
  8. By: Daniel Ladley (University of Leicester); Peter L. Rousseau (Vanderbilt University)
    Abstract: We assess systemic risk in the U.S. banking system before and after the Panic of 1873, using a combination of linear programming and computational optimization to estimate the interbank network from the total gross and net positions of national banks a week before the crisis. We impose various liquidity shocks resembling those of 1873 and find that the network can capture the distribution of interbank deposits a year later. The network may be used to predict which banks were likely to panic (i.e., change reserve agent) in the crisis. The identified banks saw their balance sheets weaken more than those of other banks in the year after the crisis. The results shed light on the nature and regional pattern of the withdrawals that may have occurred in a classic 19th-century U.S. financial crisis.
    Keywords: Panic of 1873, Reserve System, Crisis, Systemic Risk, Network.
    JEL: G2 N1
    Date: 2018–03–23
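The contagion logic behind such network exercises can be sketched with a simple write-off cascade on a made-up exposure matrix (this is not the authors' estimated 1873 network, and full clearing models such as Eisenberg-Noe are more elaborate):

```python
import numpy as np

def cascade(exposures, capital, shocked):
    """Write-off contagion: exposures[i, j] is bank i's claim on bank j.
    A bank fails when write-offs on its claims against failed banks
    reach its capital; iterate until no new failures occur."""
    n = len(capital)
    failed = {shocked}
    while True:
        losses = exposures[:, sorted(failed)].sum(axis=1)
        newly = {i for i in range(n)
                 if i not in failed and losses[i] >= capital[i]}
        if not newly:
            return sorted(failed)
        failed |= newly

# a four-bank chain: bank i holds a claim of 10 on bank i + 1
exposures = np.zeros((4, 4))
for i in range(3):
    exposures[i, i + 1] = 10.0
capital = np.array([20.0, 5.0, 5.0, 5.0])
```

In this four-bank chain, the failure of bank 3 wipes out banks 2 and 1, while bank 0's larger capital buffer absorbs the loss — a toy version of the propagation the paper traces through the reserve-agent system.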
  9. By: Marcelo Veracierto (Federal Reserve Bank of Chicago)
    Abstract: I consider a real business cycle model in which agents have private information about their stochastic value of leisure. For the case of logarithmic preferences, I provide an analytical characterization of the solution to the associated mechanism design problem. Moreover, I show a striking irrelevance result: the stationary behavior of all aggregate variables is exactly the same in the private information economy as in the full information case. I then introduce a new computational method to show that the irrelevance result holds numerically for more general CRRA preferences.
    Date: 2017
  10. By: Keegan Mendonca; Vasileios E. Kontosakos; Athanasios A. Pantelous; Konstantin M. Zuev
    Abstract: Barrier options are among the most widely traded exotic options on stock exchanges. They tend to be cheaper than the corresponding vanilla options and better represent investors' beliefs, but they have more complicated payoffs. This makes pricing barrier options an important yet non-trivial computational problem. In this paper, we develop a new stochastic simulation method for pricing barrier options and estimating the corresponding execution probabilities. We show that the proposed method always outperforms the standard Monte Carlo approach and becomes substantially more efficient when the underlying asset has high volatility, while it performs better than multilevel Monte Carlo for special cases of barrier options and underlying assets. These theoretical findings are confirmed by numerous simulation results.
    Date: 2018–03
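Subset simulation itself is not reproduced here, but the standard Monte Carlo baseline it is compared against can be sketched for a discretely monitored down-and-out call under geometric Brownian motion; all market parameters below are illustrative.

```python
import math
import numpy as np

def down_and_out_call_mc(s0=100.0, k=100.0, barrier=90.0, r=0.02, sigma=0.4,
                         t=1.0, n_steps=250, n_paths=10_000, seed=0):
    """Plain Monte Carlo price of a discretely monitored down-and-out
    call under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * math.sqrt(dt) * z, axis=1)
    s = s0 * np.exp(log_paths)
    alive = s.min(axis=1) > barrier            # knocked out if barrier hit
    payoff = np.where(alive, np.maximum(s[:, -1] - k, 0.0), 0.0)
    return math.exp(-r * t) * payoff.mean()

def bs_call(s0=100.0, k=100.0, r=0.02, sigma=0.4, t=1.0):
    """Black-Scholes vanilla call, an upper bound on the barrier price."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s0 * cdf(d1) - k * math.exp(-r * t) * cdf(d2)
```

The vanilla Black-Scholes price bounds the barrier price from above, which gives a quick sanity check on the simulation; rare-event methods like the paper's subset simulation target exactly the regimes where plain sampling of surviving, in-the-money paths becomes inefficient.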
  11. By: Blagica Petreski; Pavle Gacov
    Date: 2018–02
  12. By: Oriol Carreras; E Philip Davis; Ian Hurst; Iana Liadze; Rebecca Piggott; James Warren
    Abstract: In this paper we incorporate a macroprudential policy model within a semi-structural global macroeconomic model, NiGEM. The existing NiGEM model is expanded for the UK, Germany and Italy to include two macroprudential tools: loan-to-value ratios on mortgage lending and variable bank capital adequacy targets. The former affects the economy via its impact on the housing market, while the latter acts on the lending spreads of corporates and households. A systemic risk index that tracks the likelihood of a banking crisis is modelled to establish thresholds at which macroprudential policies should be activated by the authorities. We then present counterfactual scenarios based on the macroprudential block, including a historic dynamic simulation of the subprime crisis and the endogenous policy response thereto, and perform a cost-benefit analysis of macroprudential policies. Conclusions are drawn on the use of this tool for prediction and policy analysis, as well as on some of its limitations and potential further research.
    Keywords: macroprudential policy, house prices, credit, systemic risk, macroeconomic modelling
    JEL: E58 G28
    Date: 2018–03
  13. By: Göke, Leonard (RWTH Aachen University); Madlener, Reinhard (E.ON Energy Research Center, Future Energy Consumer Needs and Behavior (FCN))
    Abstract: In most European countries, taxes and levies, the state-induced components of electricity prices, constitute the major share of electricity prices for consumers and are charged at a fixed rate. This study analyzes whether switching state-induced price components to time-varying rates can support the integration of variable renewables (VRE) and thus help to efficiently achieve the overarching goal of decarbonizing the energy system. Based on game theory and linear programming, we introduce a novel simulation model of the power market. For a quantitative case study, the model is parametrized to represent a German energy system that meets the political objective of increasing the share of renewables (RE) in power generation to 80% by 2050. We find that dynamization supports the integration of VRE into the energy system. Whether dynamization is also an efficient instrument for promoting decarbonization depends strongly on the policy framework in place.
    Keywords: Dynamization; Climate policy; Variable renewables; Integration costs; Welfare analysis; Energy market model
    JEL: C61 C63 C70 Q42 Q48
    Date: 2017–12
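The effect of dynamizing state-induced price components can be illustrated with a toy merit-order market and one flexible consumer. This is far simpler than the paper's game-theoretic linear-programming model, and every number below is invented: a fixed levy mutes the wholesale price signal, while a levy that is low in high-renewable hours makes shifting consumption worthwhile.

```python
def dispatch_price(load, vre):
    """Merit-order energy-only price (EUR/MWh): renewables bid at zero,
    then a stylized three-block thermal stack clears the residual load."""
    residual = max(load - vre, 0.0)
    if residual == 0.0:
        return 0.0
    for capacity, marginal_cost in [(30.0, 20.0), (30.0, 45.0), (40.0, 80.0)]:
        if residual <= capacity:
            return marginal_cost
        residual -= capacity
    return 200.0  # scarcity price

hours = {"noon": (60.0, 50.0), "evening": (60.0, 5.0)}  # hour: (load, VRE)

flat_levy = lambda h: 100.0                 # fixed state-induced component
dynamic_levy = lambda h: 40.0 if hours[h][1] > 20.0 else 160.0

def chosen_hour(levy, shift_cost=30.0, default="evening"):
    """Hour in which a flexible 1 MW consumer runs its load: it leaves
    its default hour only if the all-in saving beats the shifting cost."""
    def total(h):
        extra = shift_cost if h != default else 0.0
        return dispatch_price(*hours[h]) + levy(h) + extra
    return min(hours, key=total)
```

Under the flat levy the wholesale price difference (20 vs. 45) is too small to beat the shifting cost, so the consumer stays in the low-VRE evening hour; under the dynamic levy the all-in spread widens enough to pull the load into the high-VRE noon hour, the integration effect the paper finds.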
  14. By: Rui Fan (School of Management, Swansea University); Oleksandr Talavera (School of Management, Swansea University); Vu Tran (School of Management, Swansea University)
    Abstract: This study examines whether stock indicators are affected by information on social media such as Twitter. Using a daily sample of tweets containing FTSE 100 firm names over two years, we find insignificant associations between tweets/bot-tweets and stock returns, whereas there are strongly significant associations with volatility and trading volume. Using a high-frequency sample, we detect a positive (negative) impact of tweets (bot-tweets) on stock returns. The impact of bot-tweets vanishes within 30 minutes. The results for volatility and trading volume are consistent with the daily data analysis. In addition, an event study reveals a bounce-back pattern of price reactions in response to negative retweets. Abnormal increases in tweets/bot-tweets have significant effects on stock volatility, trading volume and liquidity.
    Keywords: Social media bots, investor sentiments, noise traders, text classification, computational linguistics
    JEL: G12 G14 L86
    Date: 2018–03–23

This nep-cmp issue is ©2018 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.