nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒08‒12
seventeen papers chosen by



  1. A simulation of the insurance industry: The problem of risk model homogeneity By Heinrich, Torsten; Sabuco, Juan; Farmer, J. Doyne
  2. Winter is possibly not coming: mitigating financial instability in an agent-based model with interbank market By Lilit Popoyan; Mauro Napoletano; Andrea Roventini
  3. Rating firms and sensitivity analysis By Magni, Carlo Alberto; Malagoli, Stefano; Marchioni, Andrea; Mastroleo, Giovanni
  4. Deep Learning-Based Least Square Forward-Backward Stochastic Differential Equation Solver for High-Dimensional Derivative Pricing By Jian Liang; Zhe Xu; Peter Li
  5. Predicting criminal behavior with Lévy flights using real data from Bogota By Mateo Dulce Rubio
  6. An Early Warning System for banking crises: From regression-based analysis to machine learning techniques By Elizabeth Jane Casabianca; Michele Catalano; Lorenzo Forni; Elena Giarda; Simone Passeri
  7. Do early-ending conditional cash transfer programs crowd out school enrollment? By Martin Wiegand
  8. Using Machine Learning to Detect and Predict Corporate Accounting Fraud (Japanese) By USUKI Teppei; KONDO Satoshi; SHIRAKI Kengo; SUGA Miki; MIYAKAWA Daisuke
  9. Accelerated Share Repurchase and other buyback programs: what neural networks can bring By Olivier Guéant; Iuliia Manziuk; Jiang Pu
  10. The mirror does not lie: Endogenous fiscal limits for Slovakia By Zuzana Mucka
  11. The Effect of the Fresh Fruit and Vegetable Program (FFVP) on Fruit and Vegetable Consumption: An Agent Based Modeling Approach By Schauder, Stephanie A.; Thomsen, Michael R.; Nayga, Rodolfo M.
  12. Artificial Intelligence, Data, Ethics: An Holistic Approach for Risks and Regulation By Alexis Bogroff; Dominique Guegan
  13. Forecasting High-Risk Composite CAMELS Ratings By Lewis Gaul; Jonathan Jones; Pinar Uysal
  14. Evaluating the Effectiveness of Common Technical Trading Models By Joseph Attia
  15. Asset mispricing in loan secondary market By Mustafa Caglayan; Tho Pham; Oleksandr Talavera; Xiong Xiong
  16. How Polarized are Citizens? Measuring Ideology from the Ground-Up By Draca, Mirko; Schwarz, Carlo
  17. On the Statistical Differences between Binary Forecasts and Real World Payoffs By Nassim Nicholas Taleb

  1. By: Heinrich, Torsten; Sabuco, Juan; Farmer, J. Doyne
    Abstract: We develop an agent-based simulation of the catastrophe insurance and reinsurance industry and use it to study the problem of risk model homogeneity. The model simulates the balance sheets of insurance firms, which collect premiums from clients in return for insuring them against intermittent, heavy-tailed risks. Firms manage their capital and pay dividends to their investors, and use either reinsurance contracts or cat bonds to hedge their tail risk. The model generates plausible time series of profits and losses and recovers stylized facts, such as the insurance cycle and the emergence of asymmetric, long-tailed firm size distributions. We use the model to investigate the problem of risk model homogeneity. Under Solvency II, insurance companies are required to use only certified risk models. This has led to a situation in which only a few firms provide risk models, creating a systemic fragility to the errors in these models. We demonstrate that using too few models increases the risk of nonpayment and default while lowering profits for the industry as a whole. The presence of the reinsurance industry ameliorates the problem but does not remove it. Our results suggest that it would be valuable for regulators to incentivize model diversity. The framework we develop here provides a first step toward a simulation model of the insurance industry for testing policies and strategies for better capital management.
    Keywords: insurance; systemic risk; reinsurance; agent-based simulation; risk modeling
    JEL: C63 G22 G28
    Date: 2019–07–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:95096&r=all
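    A minimal toy sketch, not the authors' model and with purely illustrative parameters, of the core mechanism described above: insurers collect premiums against intermittent, heavy-tailed losses and default when their capital is exhausted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, years = 20, 100
capital = np.full(n_firms, 10.0)      # initial capital per insurer (assumption)
premium = 1.0                         # annual premium income (assumption)
alive = np.ones(n_firms, dtype=bool)

for _ in range(years):
    # intermittent, heavy-tailed catastrophe losses: Pareto severity, ~10% annual frequency
    losses = rng.pareto(1.5, n_firms) * 5.0 * (rng.random(n_firms) < 0.1)
    capital[alive] += premium - losses[alive]
    alive &= capital > 0              # firms whose capital is exhausted default

print("surviving insurers after", years, "years:", int(alive.sum()))
```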
  2. By: Lilit Popoyan (Laboratory of Economics and Management); Mauro Napoletano (Observatoire français des conjonctures économiques); Andrea Roventini (Observatoire français des conjonctures économiques)
    Abstract: We develop a macroeconomic agent-based model to study how financial instability can emerge from the co-evolution of interbank and credit markets and the policy responses to mitigate its impact on the real economy. The model is populated by heterogeneous firms, consumers, and banks that locally interact in different markets. In particular, banks provide credit to firms according to Basel II or III macro-prudential frameworks and manage their liquidity in the interbank market. The Central Bank performs monetary policy according to different types of Taylor rules. We find that the model endogenously generates market freezes in the interbank market which interact with the financial accelerator, possibly leading to firm bankruptcies, banking crises and the emergence of deep downturns. This requires the timely intervention of the Central Bank as a liquidity lender of last resort. Moreover, we find that the joint adoption of a three-mandate Taylor rule tackling credit growth and the Basel III macro-prudential framework is the best policy mix to stabilize financial and real economic dynamics. However, as the Liquidity Coverage Ratio spurs financial instability by increasing the pro-cyclicality of banks’ liquid reserves, a new counter-cyclical liquidity buffer should be added to Basel III to improve its performance further. Finally, we find that the Central Bank can also dampen financial instability by employing a new unconventional monetary policy tool involving active management of the interest-rate corridor in the interbank market.
    Keywords: Financial instability; Interbank market freezes; Monetary policy; Macro-prudential policy; Basel III regulation; Tinbergen principle; Agent-based models
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:spo:wpmain:info:hdl:2441/1j4v8sl4fc9a49ankmnhv6bb6a&r=all
  3. By: Magni, Carlo Alberto; Malagoli, Stefano; Marchioni, Andrea; Mastroleo, Giovanni
    Abstract: This paper introduces a model for rating a firm's default risk based on fuzzy logic and expert system and an associated model of sensitivity analysis (SA) for managerial purposes. The rating model automatically replicates the evaluation process of default risk performed by human experts. It makes use of a modular approach based on rules blocks and conditional implications. The SA model investigates the change in the firm's default risk under changes in the model inputs and employs recent results in the engineering literature of Sensitivity Analysis. In particular, it (i) allows the decomposition of the historical variation of default risk, (ii) identifies the most relevant parameters for the risk variation, and (iii) suggests managerial actions to be undertaken for improving the firm's rating.
    Keywords: Credit rating, default risk, fuzzy logic, fuzzy expert system, sensitivity analysis.
    JEL: C63 C67 G32
    Date: 2019–07–21
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:95265&r=all
  4. By: Jian Liang; Zhe Xu; Peter Li
    Abstract: We propose a new forward-backward stochastic differential equation solver for high-dimensional derivatives pricing problems by combining a deep learning solver with the least squares regression technique widely used in the least squares Monte Carlo method for the valuation of American options. Our numerical experiments demonstrate the efficiency and accuracy of our least squares backward deep neural network solver and its capability to provide accurate prices for complex early exercise derivatives such as callable yield notes. Our method can serve as a generic numerical solver for pricing derivatives across various asset groups, in particular, as an efficient means for pricing high-dimensional derivatives with early exercise features.
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1907.10578&r=all
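    As a point of reference for the least squares regression step the paper builds on, here is a minimal Longstaff-Schwartz sketch for an American put, using a polynomial basis instead of a neural network; all parameters are illustrative assumptions and this is not the authors' solver.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T, steps, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20000
dt = T / steps

# simulate geometric Brownian motion paths
z = rng.standard_normal((paths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((paths, 1), S0), S])

payoff = lambda s: np.maximum(K - s, 0.0)
V = payoff(S[:, -1])                       # value at maturity
for t in range(steps - 1, 0, -1):
    itm = payoff(S[:, t]) > 0              # regress only on in-the-money paths
    X = S[itm, t]
    # least squares estimate of the (discounted) continuation value
    cont = np.polyval(np.polyfit(X, V[itm] * np.exp(-r * dt), 3), X)
    exercise = payoff(S[itm, t]) > cont    # exercise if immediate payoff beats continuation
    V *= np.exp(-r * dt)
    V[np.flatnonzero(itm)[exercise]] = payoff(S[itm, t])[exercise]

print("American put (LSMC estimate):", round(float(np.mean(V * np.exp(-r * dt))), 3))
```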
  5. By: Mateo Dulce Rubio
    Abstract: I use residential burglary data from Bogota, Colombia, to fit an agent-based model following truncated Lévy flights (Pan et al., 2018), elucidating rational criminal behavior and validating repeat/near-repeat victimization and broken windows effects. The estimated parameters suggest that if an average house or its neighbors have never been attacked and it is suddenly burglarized, the probability of a new attack the next day increases, due to the crime event, by 79 percentage points. Moreover, the following day its neighbors will also face an increase in the probability of crime of 79 percentage points. This effect persists over a long time span. The model attains an area under the Cumulative Accuracy Profile (CAP) curve of 0.8, performing similarly to or better than state-of-the-art crime prediction models. Public policies seeking to reduce criminal activity and its negative consequences must take these mechanisms and the self-exciting nature of crime into account to effectively make criminal hotspots safer.
    Keywords: Criminal behavior, Crime prediction model, Machine learning, Agent-based model
    JEL: K42 H39 C53 C63
    Date: 2019–04–30
    URL: http://d.repec.org/n?u=RePEc:col:000508:017347&r=all
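    A minimal sketch, not the paper's model, of the self-exciting repeat/near-repeat mechanism the abstract describes: an attack raises the next-day attack probability of the hit cell and its neighbours by roughly 0.79, and the effect then fades. The decay rate and the toy one-dimensional geometry are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, days = 50, 200
base = np.full(n_cells, 0.001)      # baseline daily burglary probability (assumption)
excite = np.zeros(n_cells)          # transient, self-excited component
boost, decay = 0.79, 0.5            # jump after an attack; daily decay is an assumption

total = 0
for _ in range(days):
    p = np.clip(base + excite, 0.0, 1.0)
    hits = rng.random(n_cells) < p            # burglaries today
    total += int(hits.sum())
    excite *= decay                           # earlier excitation fades
    for i in np.flatnonzero(hits):            # repeat and near-repeat effects
        excite[i] += boost
        if i > 0:
            excite[i - 1] += boost
        if i < n_cells - 1:
            excite[i + 1] += boost

print("burglaries over", days, "days:", total)
```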
  6. By: Elizabeth Jane Casabianca (Prometeia Associazione per le Previsioni Econometriche, and DiSeS, Polytechnic University of Marche); Michele Catalano (Prometeia Associazione per le Previsioni Econometriche); Lorenzo Forni (Prometeia Associazione per le Previsioni Econometriche, and DSEA, University of Padua); Elena Giarda (Prometeia Associazione per le Previsioni Econometriche, and Cefin, University of Modena and Reggio Emilia); Simone Passeri (Prometeia Associazione per le Previsioni Econometriche)
    Abstract: Ten years after the outbreak of the 2007-2008 crisis, renewed attention is directed to money and credit fluctuations, financial crises and policy responses. By using an integrated dataset that includes 100 countries (advanced and emerging) spanning from 1970 to 2017, we propose an Early Warning System (EWS) to predict the build-up of systemic banking crises. The paper aims at (i) identifying the macroeconomic drivers of banking crises, (ii) going beyond the use of traditional discrete choice models by applying supervised machine learning (ML) and (iii) assessing the degree of countries’ exposure to systemic risks by means of predicted probabilities. Our results show that ML algorithms can have a better predictive performance than the logit models. All models deliver increasing predicted probabilities in the last years of the sample for the advanced countries, warning against the possible build-up of pre-crisis macroeconomic imbalances.
    Keywords: banking crises, EWS, machine learning, decision trees, AdaBoost
    JEL: C40 G01 C25 E44 G21
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:pad:wpaper:0235&r=all
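    A minimal sketch of the kind of comparison described above, a logit benchmark versus AdaBoost on a synthetic rare-event classification problem; the paper's dataset and predictors are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# synthetic crisis/no-crisis data; crises are rare events
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ada = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

for name, model in [("logit", logit), ("AdaBoost", ada)]:
    p = model.predict_proba(X_te)[:, 1]      # predicted crisis probabilities
    print(name, "AUC:", round(roc_auc_score(y_te, p), 3))
```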
  7. By: Martin Wiegand (Vrije Universiteit Amsterdam)
    Abstract: This paper explores how a conditional cash transfer program influences students’ schooling decisions when program payments stop in the middle of the school career. To that end, I examine Mexico’s Progresa, which covered students only until the end of middle school (at age 15) in its early years. The experimental setup permits studying the program’s impact on the probability of continuing to high school after middle school. Despite initial randomization, the program itself has likely rendered the respective samples of middle school graduates in the treatment and the control group incomparable. To account for this, I employ a newly developed semiparametric technique that combines machine learning methods with doubly-robust estimation. I find that exposure to Progresa during middle school reduced the probability of transferring to high school by 10 to 14 percentage points. Possible explanations for this effect include parents’ loss aversion, motivation crowding, anchoring, and classroom peer effects.
    Keywords: education, conditional cash transfer, Progresa, machine learning, doubly-robust estimation, loss aversion, motivation crowding, anchoring, classroom peer effects, Mexico
    JEL: I22 I25 O15 J24 D04 D91 C52
    Date: 2019–07–31
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20190053&r=all
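    A minimal sketch of a doubly-robust (AIPW) treatment-effect estimator with machine learning nuisance models, in the spirit of the approach mentioned above; the data are synthetic, the estimator is a textbook AIPW without cross-fitting or trimming, and it is not the paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))
p_true = 1 / (1 + np.exp(-X[:, 0]))              # true propensity depends on X
D = rng.random(n) < p_true                        # treatment indicator
Y = 2.0 * D + X[:, 1] + rng.normal(size=n)        # true treatment effect = 2.0

ps = GradientBoostingClassifier().fit(X, D).predict_proba(X)[:, 1]   # propensity model
mu1 = GradientBoostingRegressor().fit(X[D], Y[D]).predict(X)         # outcome model, treated
mu0 = GradientBoostingRegressor().fit(X[~D], Y[~D]).predict(X)       # outcome model, control

# augmented inverse-probability-weighting score
aipw = mu1 - mu0 + D * (Y - mu1) / ps - (~D) * (Y - mu0) / (1 - ps)
print("AIPW estimate of the treatment effect:", round(float(aipw.mean()), 3))
```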
  8. By: USUKI Teppei; KONDO Satoshi; SHIRAKI Kengo; SUGA Miki; MIYAKAWA Daisuke
    Abstract: In this paper, we examine to what extent the employment of machine learning techniques contributes to better detection and prediction of corporate (i.e., firm-level) accounting fraud. The obtained results show, first, that the capacity to detect accounting fraud increases substantially when using the machine learning-based model. Second, a similar improvement in predictive power is also confirmed. Such higher performance is due both to the employment of the machine learning technique and to the higher dimensions of predictors. Third, we also confirm that a larger variety of data, such as corporate governance-related variables, which have not necessarily been used as main predictors in extant studies, contributes to better detection and prediction to some extent. These results jointly suggest the existence of various unexploited information sources which are potentially useful for the detection and prediction of corporate accounting fraud.
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:eti:rdpsjp:19039&r=all
  9. By: Olivier Guéant; Iuliia Manziuk; Jiang Pu
    Abstract: When firms want to buy back their own shares, they have a choice between several alternatives. While they often carry out open market repurchases, they also increasingly rely on banks through complex buyback contracts involving option components, e.g. accelerated share repurchase contracts, VWAP-minus profit-sharing contracts, etc. The entanglement between the execution problem and the option hedging problem makes the management of these contracts a difficult task that should not boil down to simple Greek-based risk hedging, contrary to what happens with classical books of options. In this paper, we propose a machine learning method to optimally manage several types of buyback contracts. In particular, we recover strategies similar to those obtained in the literature with partial differential equation and recombinant tree methods, and we show that our new method, which does not suffer from the curse of dimensionality, makes it possible to address types of contracts that could not be addressed with grid or tree methods.
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1907.09753&r=all
  10. By: Zuzana Mucka (Council for Budget Responsibility)
    Abstract: We study the interactions among fiscal policy, fiscal limits and the associated sovereign risk premium. The fiscal limit distribution, which measures the ability of the government to service its debt, arises endogenously from dynamic Laffer curves. We assume a feedback loop between the fiscal limit distribution and the risk premium and determine them simultaneously using an efficient iterative scheme. A nonlinear relationship between the sovereign risk premium and the level of government debt then emerges in equilibrium. The model is calibrated to Slovak data assuming steeply growing age-related transfers and a volatile business cycle. We study the impact of various model parameters on the conditional (state-dependent) and unconditional distributions of the fiscal limit. Fiscal limit distributions obtained via a Markov Chain Monte Carlo regime-switching algorithm depend on the rate of growth of government transfers, the degree of countercyclicality of policy, and the distribution of the underlying economic conditions. We find that both distributions are considerably more heavy-tailed than those usually obtained in the literature for advanced economies, and are very sensitive to the size and rate of growth of transfers, the business cycle phase and the credibility of fiscal policy. The main policy message is that the Maastricht debt limit of 60 percent of GDP is not safe enough for Slovakia. Furthermore, credible reforms reining in age-related spending, and thus stabilising public finances in the long run, should be a priority.
    Keywords: Simulation Methods and Modelling, Fiscal Policy, Government Expenditures, Debt Management and Sovereign Debt
    JEL: C15 C63 E62 H5 H63
    Date: 2019–05
    URL: http://d.repec.org/n?u=RePEc:cbe:wpaper:201902&r=all
  11. By: Schauder, Stephanie A.; Thomsen, Michael R.; Nayga, Rodolfo M.
    Keywords: Food Consumption/Nutrition/Food Safety
    Date: 2019–06–25
    URL: http://d.repec.org/n?u=RePEc:ags:aaea19:290942&r=all
  12. By: Alexis Bogroff (UP1 - Université Panthéon-Sorbonne); Dominique Guegan (UP1 - Université Panthéon-Sorbonne, CES - Centre d'économie de la Sorbonne - UP1 - Université Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique, Labex ReFi - UP1 - Université Panthéon-Sorbonne, University of Ca’ Foscari [Venice, Italy])
    Abstract: An extensive list of risks relative to big data frameworks and their use through models of artificial intelligence is provided along with measurements and implementable solutions. Bias, interpretability and ethics are studied in depth, with several interpretations from the point of view of developers, companies and regulators. Reflections suggest that fragmented frameworks increase the risks of model misspecification, opacity and bias in the results. Domain experts and statisticians need to be involved in the whole process, as the business objective must drive each decision from the data extraction step to the final actionable prediction. We propose a holistic and original approach to take into account the risks encountered throughout the implementation of systems using artificial intelligence, from the choice of the data and the selection of the algorithm to the decision making.
    Keywords: Artificial Intelligence,Bias,Big Data,Ethics,Governance,Interpretability,Regulation,Risk
    Date: 2019–06
    URL: http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-02181597&r=all
  13. By: Lewis Gaul; Jonathan Jones; Pinar Uysal
    Keywords: Bank supervision and regulation, early warning models, CAMELS ratings, machine learning
    JEL: G21 G28 C53
    Date: 2019–07–23
    URL: http://d.repec.org/n?u=RePEc:fip:fedgif:1252&r=all
  14. By: Joseph Attia
    Abstract: How effective are the most common trading models? The answer may help investors realize the upsides of using each model, act as a segue for investors into more complex financial analysis and machine learning, and increase financial literacy amongst students. Creating original versions of popular models, like linear regression, K-Nearest Neighbor, and moving average crossovers, we can test how each model performs on the most popular stocks and largest indexes. With the results for each, we can compare the models and understand which model reliably increases performance. The trials showed that while all three models reduced losses on stocks with strong overall downward trends, the two machine learning models did not work as well to increase profits. Moving average crossovers outperformed a continuous investment every time, although they did result in a more volatile investment as well. Furthermore, once the program implementing moving average crossovers is built, what are the optimal periods to use? A massive test consisting of 169,880 trials showed the best periods to use to increase investment performance (5, 10) and to decrease volatility (33, 44). In addition, the data showed numerous trends, such as a smaller short SMA period being accompanied by higher performance. Plotting volatility against performance shows that the high-risk, high-reward saying holds true: for these investments, performance increases with volatility.
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1907.10407&r=all
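    A minimal sketch of a moving average crossover rule with the (5, 10) periods reported above, applied to a synthetic price series rather than the paper's stock data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# synthetic daily price path (geometric random walk)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 1000))))

sma_short = price.rolling(5).mean()
sma_long = price.rolling(10).mean()
# long the asset the day after the short SMA crosses above the long SMA
position = (sma_short > sma_long).astype(int).shift(1).fillna(0)
returns = price.pct_change().fillna(0)

strategy = (1 + position * returns).prod() - 1
buy_hold = (1 + returns).prod() - 1
print(f"crossover: {strategy:.2%}  buy-and-hold: {buy_hold:.2%}")
```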
  15. By: Mustafa Caglayan (Heriot-Watt University); Tho Pham (University of Reading); Oleksandr Talavera (University of Birmingham); Xiong Xiong (Tianjin University)
    Abstract: This study examines the presence of mispricing in Bondora, a leading European peer-to-peer lending platform, over the 2016-2019 period. Implementing machine-learning methods, we calculate the likelihood of success for loan resale in the Bondora secondary market and compare it with ex-post outcomes. We find evidence of mispricing mainly driven by the differences in market participants’ perceptions about asset values: low-quality assets are successfully sold while high-quality assets are not. Once sellers discover buyers’ beliefs about asset prices, they revalue their assets according to buyers’ perceptions to exploit this mismatch in subsequent listings. Our results are robust to various statistical and machine learning methods.
    Keywords: mispricing, online secondary market, peer-to-peer lending, belief dispersion
    JEL: G12 G20
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:bir:birmec:19-07&r=all
  16. By: Draca, Mirko (University of Warwick and Centre for Economic Performance, LSE); Schwarz, Carlo (University of Warwick and Centre for Competitive Advantage in the Global Economy (CAGE))
    Abstract: Strong evidence has been emerging that major democracies have become more politically polarized, at least according to measures based on the ideological positions of political elites. We ask: have the general public (‘citizens’) followed the same pattern? Our approach is based on unsupervised machine learning models as applied to issue-position survey data. This approach firstly indicates that coherent, latent ideologies are strongly apparent in the data, with a number of major, stable types that we label as: Liberal Centrist, Conservative Centrist, Left Anarchist and Right Anarchist. Using this framework, and a resulting measure of ‘citizen slant’, we are then able to decompose the shift in ideological positions across the population over time. Specifically, we find evidence of a ‘disappearing center’ in a range of countries, with citizens shifting away from centrist ideologies into anti-establishment ‘anarchist’ ideologies over time. This trend is especially pronounced for the US.
    Keywords: Polarization ; Ideology ; Unsupervised Learning
    JEL: D72 C81
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:wrk:warwec:1218&r=all
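    A minimal sketch of the unsupervised approach described above: clustering synthetic issue-position responses into latent ideology types. The paper's actual pipeline, survey data and labelling of the four types are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# 1,000 synthetic respondents answering 10 issue questions on a 1-5 agree/disagree scale,
# drawn around four artificial "ideology" centroids
centers = rng.integers(1, 6, size=(4, 10))
labels_true = rng.integers(0, 4, 1000)
responses = np.clip(centers[labels_true] + rng.normal(0, 0.8, (1000, 10)), 1, 5)

# recover latent types with k-means (the number of clusters is an assumption)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(responses)
print("respondents per latent type:", np.bincount(km.labels_))
```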
  17. By: Nassim Nicholas Taleb
    Abstract: What do binary (or probabilistic) forecasting abilities have to do with overall performance? We map the difference between (univariate) binary predictions, bets and "beliefs" (expressed as a specific "event" will happen/will not happen) and real-world continuous payoffs (numerical benefits or harm from an event) and show the effect of their conflation and mischaracterization in the decision-science literature. We also examine the differences under thin and fat tails. The effects are: A- Spuriousness of many psychological results, particularly those documenting that humans overestimate tail probabilities and rare events, or that they overreact to fears of market crashes, ecological calamities, etc. Many perceived "biases" are just mischaracterizations by psychologists. There is also a misuse of Hayekian arguments in promoting prediction markets. We quantify such conflations with a metric for "pseudo-overestimation". B- Being a "good forecaster" in binary space doesn't lead to having a good actual performance, and vice versa, especially under nonlinearities. A binary forecasting record is likely to be a reverse indicator under some classes of distributions. Deeper uncertainty or more complicated and realistic probability distributions worsen the conflation. C- Machine Learning: Some nonlinear payoff functions, while not lending themselves to verbalistic expressions and "forecasts", are well captured by ML or expressed in option contracts. D- Fat-tailedness: The difference is exacerbated in the power law classes of probability distributions.
    Date: 2019–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1907.11162&r=all
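    A minimal numerical illustration, not Taleb's formal argument, of point B above: a forecaster who is right on the binary event 95% of the time can still have a negative realized payoff when the remaining 5% is fat-tailed. The payoff distribution and its parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
gain = 1.0                                     # small gain when the binary call is right
loss = -(rng.pareto(1.3, n) + 1) * 5.0         # heavy-tailed (Pareto) loss when it is wrong
hit = rng.random(n) < 0.95                     # binary forecast is correct 95% of the time
payoff = np.where(hit, gain, loss)

print("binary hit rate:", round(hit.mean(), 3))        # ~0.95, a "good forecaster"
print("mean realized payoff:", round(payoff.mean(), 3))  # typically negative
```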

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.