on Computational Economics
Issue of 2019‒07‒08
nine papers chosen by
By: | Laura Carvalho; Corrado Di Guilmi |
Abstract: | The paper presents a stock-flow consistent agent-based model with effective demand, endogenous credit creation, and labor-saving technological progress. The aim is to study the joint dynamics of the personal and functional distributions of income as a result of technological unemployment, together with the effect on household debt. Numerical simulations show the potentially destabilizing effect of technological unemployment and reveal that an increase in the profit share of income amplifies the negative effect of income inequality on the business cycle and growth. The sensitivity analysis provides indications of the effectiveness of possible mixes of fiscal and redistributive policies, but also demonstrates that the effectiveness of policy measures depends strongly on behavioral and institutional factors. (A toy sketch of the technological-unemployment mechanism follows this entry.) |
Keywords: | stock-flow consistent agent-based model; income inequality; functional distribution; technological unemployment; social imitation |
JEL: | C63 D31 E21 E25 |
Date: | 2019–02–05 |
URL: | http://d.repec.org/n?u=RePEc:spa:wpaper:2019wpecon04&r=all |
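A minimal, hypothetical sketch of the mechanism the abstract names: labor-saving productivity gains that outpace demand growth generate technological unemployment in a toy agent population. Every rule and parameter value below is an expository assumption, not the authors' specification.

```python
# Toy sketch: labor-saving technological progress vs. demand growth.
# Illustrative assumptions only, not the authors' model specification.
import random

random.seed(42)
NUM_FIRMS, PERIODS = 100, 60
firms = [{"productivity": 1.0, "output": 10.0} for _ in range(NUM_FIRMS)]
labor_supply = NUM_FIRMS * 10.0  # fixed workforce

for t in range(PERIODS):
    for f in firms:
        if random.random() < 0.1:   # labor-saving innovation arrives
            f["productivity"] *= 1.05
        f["output"] *= 1.002        # demand grows more slowly on average
    labor_demand = sum(f["output"] / f["productivity"] for f in firms)
    unemployment = max(0.0, 1.0 - labor_demand / labor_supply)
    if t % 20 == 0:
        print(f"t={t:2d}  unemployment={unemployment:.1%}")
```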
By: | Richard Heuver; Ron Triepels |
Abstract: | Liquidity stress constitutes an ongoing threat to financial stability in the banking sector. A bank that manages its liquidity inadequately may find itself unable to meet its payment obligations. These liquidity issues, in turn, can negatively impact the liquidity position of many other banks due to contagion effects. For this reason, central banks carefully monitor the payment activities of banks in financial market infrastructures and try to detect early-warning signs of liquidity stress. In this paper, we investigate whether this monitoring task can be performed by supervised machine learning. We construct probabilistic classifiers that estimate the probability that a bank faces liquidity stress. The classifiers are trained on a dataset of payment features of European banks that spans several known stress events. Our experimental results show that the classifiers detect the periods in which the banks faced liquidity stress reasonably well. (A minimal classifier sketch follows this entry.) |
Keywords: | Risk Monitoring; Liquidity Stress; Neural Networks; Financial Market Infrastructures; Large-Value Payment Systems |
JEL: | G32 G33 C45 E42 |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:dnb:dnbwpp:642&r=all |
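The classifier idea above can be sketched in a few lines. The paper trains neural networks on payment features; here plain logistic regression stands in as the simplest probabilistic classifier, and the features and labels are synthetic assumptions rather than real payment data.

```python
# Minimal sketch of a probabilistic liquidity-stress classifier.
# Feature names and data are synthetic stand-ins for payment features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical daily features per bank: outgoing value, incoming value,
# payment timing, and use of central-bank credit.
X = rng.normal(size=(n, 4))
# Synthetic label: stress is more likely when outflows outpace inflows.
logits = 1.5 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
# predict_proba yields the estimated probability of liquidity stress.
print("P(stress) for first test bank-day:", clf.predict_proba(X_te)[0, 1])
print("test accuracy:", clf.score(X_te, y_te))
```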
By: | Tucker Hybinette Balch; Mahmoud Mahfouz; Joshua Lockhart; Maria Hybinette; David Byrd |
Abstract: | We show how a multi-agent simulator can support two important but distinct methods for assessing a trading strategy: Market Replay and Interactive Agent-Based Simulation (IABS). Our solution is important because each method offers strengths and weaknesses that expose or conceal flaws in the subject strategy. A key weakness of Market Replay is that the simulated market does not substantially adapt or respond to the presence of the experimental strategy. IABS methods provide an artificial market for the experimental strategy using a population of background trading agents. Because the background agents attend to market conditions and the current price as part of their strategy, the overall market is responsive to the presence of the experimental strategy. Even so, IABS methods have their own weaknesses, primarily that it is unclear whether the market environment they provide is realistic. We describe our approach in detail, and illustrate its use in an example application: the evaluation of market impact for orders of various sizes. (A toy replay-versus-IABS comparison follows this entry.) |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1906.12010&r=all |
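A toy comparison of the two assessment methods described above. The price path, the background-agent rule, and the impact coefficient are all illustrative assumptions; the point is only that the replayed market ignores the experimental order while the agent-based market reacts to it.

```python
# Toy contrast between Market Replay and an interactive agent-based
# simulation (IABS). Dynamics and parameters are illustrative.
import random

random.seed(1)
history = [100.0 + 0.1 * t for t in range(100)]  # recorded price path

def market_replay(order_size):
    # Replay: the historical path never reacts to our order.
    return history[-1]

def iabs(order_size, n_agents=50):
    # Background agents re-quote around the last trade, so a large
    # order shifts the price they subsequently respond to.
    price = history[0]
    for _ in range(1, len(history)):
        impact = 0.001 * order_size  # our order pressures the market
        quotes = [price + impact + random.gauss(0, 0.05)
                  for _ in range(n_agents)]
        price = sum(quotes) / n_agents
    return price

for size in (0, 100, 1000):
    print(f"size={size:5d}  replay={market_replay(size):.2f}  "
          f"iabs={iabs(size):.2f}")
```

Running it shows the replayed terminal price is invariant to order size, while the simulated price rises with it, which is exactly the responsiveness distinction drawn in the abstract.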
By: | William A. Brock; J. Isaac Miller (Department of Economics, University of Missouri) |
Abstract: | Assessments of decreases in economic damages from climate change mitigation typically rely on climate output from computationally expensive precomputed runs of general circulation models (GCMs) under a handful of scenarios with discretely varying targets, such as the four representative concentration pathways (RCPs) for CO2 and other anthropogenically emitted gases. Although such analyses are extremely valuable in informing scientists and policymakers about specific, well-known, and massive mitigation goals, we add to the literature by considering potential outcomes from more modest policy changes that may not be represented by any concentration pathway or GCM output. We construct computationally efficient Quasi-representative Concentration Pathways (QCPs) to leverage existing scenarios featuring plausible concentration pathways. Their computational efficiency permits common iterative-replication methods for assessing model uncertainty, such as bootstrapping. We illustrate by feeding two QCPs through a computationally efficient statistical emulator and dose-response functions extrapolated from estimates in the recent literature to gauge the effects of mitigation on the relative risk of heat stress mortality. (A bootstrap-and-emulator sketch follows this entry.) |
Keywords: | representative concentration pathways, statistical emulation, climate change mitigation, heat stress mortality |
JEL: | C14 C33 C63 Q54 |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:umc:wpaper:1904&r=all |
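A sketch of the bootstrap-plus-emulator idea: an inexpensive emulator is re-fit on resampled data so that uncertainty in a mitigation effect can be quantified by iteration. The linear emulator, the log-linear dose-response function, and the 50 ppm mitigation scenario are stand-in assumptions, not the paper's fitted models.

```python
# Bootstrap over a cheap statistical emulator: each replica re-fits the
# emulator and propagates it through a dose-response function.
import numpy as np

rng = np.random.default_rng(0)
conc = np.linspace(400, 600, 80)  # CO2 concentration (ppm)
temp = 0.01 * (conc - 400) + rng.normal(0, 0.1, conc.size)  # warming (C)

def relative_risk(delta_t, beta=0.05):
    # Hypothetical log-linear dose-response for heat stress mortality.
    return np.exp(beta * delta_t)

effects = []
for _ in range(1000):
    idx = rng.integers(0, conc.size, conc.size)      # bootstrap resample
    slope = np.polyfit(conc[idx], temp[idx], 1)[0]   # re-fit emulator
    # Mitigation scenario: concentration 50 ppm lower at the horizon.
    effects.append(relative_risk(slope * 50.0))
lo, hi = np.percentile(effects, [2.5, 97.5])
print(f"relative risk, unmitigated vs mitigated: "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```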
By: | Christian Bongiorno (MICS - Mathématiques et Informatique pour la Complexité et les Systèmes, CentraleSupélec); Salvatore Miccichè (Dipartimento di Fisica e Chimica, Università degli Studi di Palermo, Palermo, Italy); Rosario Mantegna (Dipartimento di Fisica e Chimica, Università degli Studi di Palermo, Palermo, Italy; Complexity Science Hub Vienna; Department of Computer Science, University College London) |
Abstract: | We develop a greedy algorithm that is fast and scalable in detecting a nested partition extracted from a dendrogram obtained by hierarchical clustering of a multivariate series. Our algorithm provides a p-value for each clade observed in the hierarchical tree. The p-value is obtained by computing a number of bootstrap replicas of the dissimilarity matrix and by performing a statistical test on each difference between the dissimilarity associated with a given clade and that of its parent node. We demonstrate the effectiveness of our algorithm on a set of benchmarks generated with a hierarchical factor model. We compare the results obtained by our algorithm with those of Pvclust, a widely used algorithm developed with a global approach originally motivated by phylogenetic studies. In our numerical experiments we focus on the role of multiple-hypothesis-test correction and on the robustness of the algorithms to inaccuracies and errors in datasets. We also apply our algorithm to a reference empirical dataset. We verify that our algorithm is much faster than the Pvclust algorithm and scales better in both the number of elements and the number of records of the investigated multivariate set. It provides a hierarchically nested partition in a much shorter time than currently used algorithms, making statistically validated cluster detection feasible in very large systems. (A minimal bootstrap-test sketch follows this entry.) |
Date: | 2019–06–17 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02157744&r=all |
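A minimal stand-in for the paper's per-clade bootstrap test: fix a candidate clade and its parent, then check across bootstrap replicas of the dissimilarity matrix how often the clade fails to be tighter than its parent. The data, the correlation-based dissimilarity, and the clade choice are illustrative assumptions, not the paper's procedure.

```python
# Bootstrap p-value for one clade: compare the clade's average internal
# dissimilarity with its parent's across bootstrap replicas.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # 200 records of 10 elements
X[:, :5] += rng.normal(size=(200, 1))   # common factor -> one true clade

def dissimilarity(data):
    # Correlation-based dissimilarity between the 10 elements.
    return np.sqrt(2.0 * (1.0 - np.corrcoef(data, rowvar=False)))

def avg_within(d, idx):
    sub = d[np.ix_(idx, idx)]
    return sub[np.triu_indices_from(sub, k=1)].mean()

clade = list(range(5))    # candidate clade: the factor-driven elements
parent = list(range(10))  # its parent here: the root (all elements)

diffs = []
for _ in range(500):
    boot = X[rng.integers(0, X.shape[0], X.shape[0])]  # resample records
    d = dissimilarity(boot)
    diffs.append(avg_within(d, parent) - avg_within(d, clade))
# One-sided bootstrap p-value: clade no tighter than its parent.
p = float(np.mean(np.array(diffs) <= 0.0))
print(f"bootstrap p-value for the candidate clade: {p:.3f}")
```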
By: | Tilman Graff |
Abstract: | I assess the efficiency of transport networks for every country in Africa. Using rich spatial data, I simulate trade flows over more than 70,000 links covering the entire continent. I maximise over the space of networks and find the optimal road system for every African state. My simulations predict that Africa would gain 1.1% of total welfare from better organising its national road systems. I then construct a novel dataset of local network inefficiency and find that colonial infrastructure projects significantly skew trade networks towards a sub-optimal equilibrium. I also find evidence for regional favouritism and inefficient aid provision. (A toy network-upgrade sketch follows this entry.) |
JEL: | F1 O18 R4 |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:25951&r=all |
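A toy version of the network-optimisation exercise: greedily upgrade the road link whose improvement most reduces total travel cost between all location pairs. The tiny grid, the uniform costs, and the halving-when-paved rule are assumptions, orders of magnitude simpler than the paper's continent-wide simulation.

```python
# Greedy road-upgrade sketch on a small grid of locations.
import itertools
import networkx as nx

G = nx.grid_2d_graph(4, 4)              # 16 locations
for u, v in G.edges:
    G.edges[u, v]["cost"] = 1.0          # unpaved road

def total_cost(g):
    # Sum of least-cost route lengths over all location pairs.
    dist = dict(nx.all_pairs_dijkstra_path_length(g, weight="cost"))
    return sum(dist[u][v] for u, v in itertools.combinations(g.nodes, 2))

budget = 5  # number of links we can pave (paving halves the cost)
for _ in range(budget):
    base = total_cost(G)
    best_edge, best_gain = None, 0.0
    for e in G.edges:
        old = G.edges[e]["cost"]
        G.edges[e]["cost"] = old / 2     # try paving this link
        gain = base - total_cost(G)
        G.edges[e]["cost"] = old         # undo the trial
        if gain > best_gain:
            best_edge, best_gain = e, gain
    G.edges[best_edge]["cost"] /= 2      # commit the best upgrade
    print(f"paved {best_edge}, travel-cost saving {best_gain:.1f}")
```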
By: | Joshua C. C. Chan; Liana Jacobi; Dan Zhu |
Abstract: | The marginal likelihood is the gold standard for Bayesian model comparison, although it is well known that its value can be sensitive to the choice of prior hyperparameters. Most models require computationally intensive simulation-based methods to evaluate the typically high-dimensional integral of the marginal likelihood expression. Hence, despite the recognition that prior sensitivity analysis is important in this context, it is rarely done in practice. In this paper we develop efficient and feasible methods to compute the sensitivities of the marginal likelihood, obtained via two common simulation-based methods, with respect to any prior hyperparameter, alongside the MCMC estimation algorithm. Our approach builds on Automatic Differentiation (AD), which has only recently been introduced to the more computationally intensive setting of Markov chain Monte Carlo simulation. We illustrate our approach with two empirical applications in the context of widely used multivariate time series models. (A closed-form AD sketch follows this entry.) |
Keywords: | automatic differentiation, model comparison, vector autoregression, factor models |
JEL: | C11 C53 E37 |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2019-45&r=all |
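A sketch of the core idea: automatic differentiation delivers the sensitivity of a log marginal likelihood to a prior hyperparameter. A conjugate normal model is used here so the marginal likelihood is available in closed form; the paper instead differentiates through simulation-based estimators alongside the MCMC algorithm.

```python
# AD sensitivity of a log marginal likelihood to a prior hyperparameter,
# in a toy conjugate model where the integral is exact.
import jax
import jax.numpy as jnp

y = jnp.array([0.8, 1.2, 0.5, 1.9, 1.1])  # observed data
sigma2 = 1.0                                # known likelihood variance

def log_marginal_likelihood(tau2):
    # y_i ~ N(mu, sigma2) with prior mu ~ N(0, tau2); marginally,
    # y ~ N(0, sigma2*I + tau2 * 11'), a multivariate normal.
    n = y.shape[0]
    cov = sigma2 * jnp.eye(n) + tau2 * jnp.ones((n, n))
    _, logdet = jnp.linalg.slogdet(cov)
    quad = y @ jnp.linalg.solve(cov, y)
    return -0.5 * (n * jnp.log(2 * jnp.pi) + logdet + quad)

# The sensitivity is one jax.grad call, with no re-estimation needed.
sensitivity = jax.grad(log_marginal_likelihood)(1.0)
print("d log m(y) / d tau^2 at tau^2 = 1:", sensitivity)
```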
By: | Rui Marvão Pereira; Alfredo Marvão Pereira |
Abstract: | Renewable energy production subsidies alleviate the pressure on electricity prices associated with carbon and energy pricing policies in the process of decarbonizing and electrifying the Portuguese economy. Our simulation results show that a feed-in tariff financed by a carbon tax leads to adverse macroeconomic effects as well as adverse and regressive distributional welfare effects. On the flip side, however, we show that using the carbon tax revenues to finance a feed-in tariff is an improvement over the simple carbon tax case along all the relevant policy dimensions. The feed-in tariff mechanism, when added to the carbon tax, leads to better environmental outcomes at lower cost in terms of both the economic and the social-justice implications. The policy implications are clear. First, because of its adverse economic and distributional effects, a carbon tax should not be used in isolation; using the revenues to finance a feed-in tariff dominates the simple carbon tax case in all dimensions. Second, the search for appropriate recycling mechanisms beyond feed-in tariffs is as relevant an issue as the carbon tax itself, since such mechanisms can potentially reverse the adverse effects of the tax. |
Keywords: | Dynamic General Equilibrium, Renewable Energy, Feed-in Tariff, Carbon Taxation, Macroeconomic Effects, Distributional Effects, Environmental Effects, Portugal |
JEL: | C68 E62 H23 Q43 Q48 |
Date: | 2019–06 |
URL: | http://d.repec.org/n?u=RePEc:mde:wpaper:0123&r=all |
By: | Monnickendam, Giles; de Asmundis, Carlo |
Abstract: | Background: Economic evaluations include estimates of surgical procedure costs that are usually derived by allocating operating room (OR) costs in proportion to the average duration of different procedure types. However, ORs run with average utilisation below 100%, owing to idle time between procedures and at the end of the day. Longer and less predictable procedures generate greater OR idle time for a given tolerance of schedule over-runs. Estimates of surgical procedure costs based on average procedure duration alone as a measure of OR resource consumption will not capture the impact of the length and variability of procedure duration on OR idle time and capacity utilisation. Objective: To demonstrate how real-world OR scheduling practices lead to different levels of resource consumption than predicted by simple micro-costing approaches based on average procedure duration, and how those differences can vary between procedures with significantly different distributions of duration. Methods: We use a discrete event simulation model, calibrated with real-world data from a single surgical centre in Belgium, to compare simulated resource consumption, including idle time, for two alternative surgical procedures for ablation for atrial fibrillation. Results: We demonstrate that simple micro-costing approaches can under-estimate effective resource consumption by between 31% and 48% for a procedure with a long and unpredictable duration. For a shorter and more predictable procedure the under-estimate is only 15%. Conclusion: Simple approaches to estimating procedure costs may under-estimate resource consumption, and do so in a way that is biased against technologies with shorter and more predictable procedure durations. For health technology decisions where a substantial share of the costs consists of OR resources, a more sophisticated approach, taking account of the real-world implications of the distribution of procedure durations, should be used to avoid potential bias. (A toy scheduling simulation follows this entry.) |
Keywords: | Economic evaluation; Capacity utilisation; Discrete event simulation; Operating room scheduling; Micro-costing; Atrial fibrillation |
JEL: | J50 |
Date: | 2018–03–01 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:86483&r=all |
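A toy day-scheduling simulation of the costing argument: attribute the OR's full daily cost, idle time included, to the procedures actually performed, and compare with duration-only micro-costing. The cost rate, day length, scheduling buffer, and duration distributions are illustrative assumptions, not the Belgian centre's data; the qualitative gap between the two procedure profiles is the point.

```python
# OR day simulation: effective cost per procedure vs duration-only cost.
import random

random.seed(0)
DAY_MINUTES, COST_PER_MIN, N_DAYS = 480, 20.0, 2000

def simulate(mean, sd, label):
    procs, cost = 0, 0.0
    for _ in range(N_DAYS):
        clock = 0.0
        # Schedule another case only while an over-run is unlikely.
        while clock + mean + 2 * sd <= DAY_MINUTES:
            clock += max(10.0, random.gauss(mean, sd))
            procs += 1
        cost += DAY_MINUTES * COST_PER_MIN   # idle minutes still paid for
    naive = mean * COST_PER_MIN              # duration-only estimate
    effective = cost / procs
    print(f"{label}: naive {naive:.0f}, effective {effective:.0f} "
          f"({effective / naive - 1:+.0%})")

simulate(90, 15, "short, predictable")
simulate(180, 60, "long, unpredictable")
```

The longer, more variable procedure leaves more of the day idle for the same over-run tolerance, so duration-only costing understates its effective resource consumption by more, which is the direction of bias the abstract reports.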