nep-cmp New Economics Papers
on Computational Economics
Issue of 2022‒10‒17
24 papers chosen by

  1. Forecasting World Trade Using Big Data and Machine Learning Techniques By Andrei Dubovik; Adam Elbourne; Bram Hendriks; Mark Kattenberg
  2. Artificial Intelligence Models and Employee Lifecycle Management: A Systematic Literature Review By Saeed Nosratabadi; Roya Khayer Zahed; Vadim Vitalievich Ponkratov; Evgeniy Vyacheslavovich Kostyrin
  3. Housing Boom and Headline Inflation: Insights from Machine Learning By Mr. Yunhui Zhao; Yang Liu; Di Yang
  4. Tree-Based Learning in RNNs for Power Consumption Forecasting By Roberto Baviera; Pietro Manzoni
  5. Weak Supervision in Analysis of News: Application to Economic Policy Uncertainty By Paul Trust; Ahmed Zahran; Rosane Minghim
  6. Computing XVA for American basket derivatives by Machine Learning techniques By Ludovic Goudenege; Andrea Molent; Antonino Zanette
  7. The boosted HP filter is more general than you might think By Ziwei Mei; Peter C. B. Phillips; Zhentao Shi
  8. RESHAPE: Explaining Accounting Anomalies in Financial Statement Audits by enhancing SHapley Additive exPlanations By Ricardo Müller; Marco Schreyer; Timur Sattarov; Damian Borth
  9. What does machine learning say about the drivers of inflation? By Emanuel Kohlscheen
  10. Understanding and Predicting Systemic Corporate Distress: A Machine-Learning Approach By Ms. Burcu Hacibedel; Ritong Qu
  11. Two-stage Modeling for Prediction with Confidence By Dangxing Chen
  12. Predicting Performances of Mutual Funds using Deep Learning and Ensemble Techniques By Nghia Chu; Binh Dao; Nga Pham; Huy Nguyen; Hien Tran
  13. Artificial Intelligence, Surveillance, and Big Data By David Karpa; Torben Klarl; Michael Rochlitz
  14. Learning Value-at-Risk and Expected Shortfall By D Barrera; S Crépey; E Gobet; Hoang-Dung Nguyen; B Saadeddine
  15. Interpreting and predicting the economy flows: A time-varying parameter global vector autoregressive integrated the machine learning model By Yukang Jiang; Xueqin Wang; Zhixi Xiong; Haisheng Yang; Ting Tian
  16. Smiles in Profiles: Improving Fairness and Efficiency Using Estimates of User Preferences in Online Marketplaces By Susan Athey; Dean Karlan; Emil Palikot; Yuan Yuan
  17. An Attention Free Long Short-Term Memory for Time Series Forecasting By Hugo Inzirillo; Ludovic De Villelongue
  18. Model-based gym environments for limit order book trading By Joseph Jerome; Leandro Sanchez-Betancourt; Rahul Savani; Martin Herdegen
  19. Dynamic Early Warning and Action Model By Hannes Mueller; Christopher Rauh; Alessandro Ruggieri
  20. Decisions and Performance Under Bounded Rationality: A Computational Benchmarking Approach By Zegners, Dainis; Sunde, Uwe; Strittmatter, Anthony
  21. Income inequality and redistribution in Lithuania: The role of policy, labor market, income, and demographics By Nerijus Cerniauskas; Denisa Sologon; Cathal O'Donoghue; Linas Tarasonis
  22. A Dynamic Stochastic Block Model for Multi-Layer Networks By Ovielt Baltodano López; Roberto Casarin
  23. Uncertainty analysis of contagion processes based on a functional approach By Zonghui Yao; Dunia López-Pintado; Sara López-Pintado
  24. Labour supply responses to income tax changes in Spain. By Antonio Cutanda; Juan A. Sanchis

  1. By: Andrei Dubovik (CPB Netherlands Bureau for Economic Policy Analysis); Adam Elbourne (CPB Netherlands Bureau for Economic Policy Analysis); Bram Hendriks (CPB Netherlands Bureau for Economic Policy Analysis); Mark Kattenberg (CPB Netherlands Bureau for Economic Policy Analysis)
    Abstract: We compare machine learning techniques to a large Bayesian VAR for nowcasting and forecasting world merchandise trade. We focus on how the predictive performance of the machine learning models changes when they have access to a big dataset with 11,017 data series on key economic indicators. The machine learning techniques used include lasso, random forest and linear ensembles. We additionally compare the accuracy of the forecasts during and outside the Great Financial Crisis. We find no statistically significant differences in forecasting accuracy whether with respect to the technique, the dataset used - small or big - or the time period.
    JEL: F17 C53 C55
    Date: 2022–10
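    [Editor's note] As a rough illustration of the lasso technique this abstract mentions (shrinkage regression for settings with far more candidate predictor series than observations), here is a minimal sketch on synthetic data. The data-generating process, penalty value, and variable names are illustrative and not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n_obs, n_series = 120, 500        # few monthly observations, many indicators
X = rng.normal(size=(n_obs, n_series))
coefs = np.zeros(n_series)
coefs[:5] = [1.0, -0.8, 0.6, 0.5, -0.4]   # only 5 series truly matter
y = X @ coefs + rng.normal(0.0, 0.5, size=n_obs)

X_tr, X_te, y_tr, y_te = X[:90], X[90:], y[:90], y[90:]
lasso = Lasso(alpha=0.1).fit(X_tr, y_tr)  # L1 penalty enforces sparsity

rmse = np.sqrt(np.mean((lasso.predict(X_te) - y_te) ** 2))
n_selected = int(np.sum(lasso.coef_ != 0.0))
```

    Even with more series than observations, the L1 penalty zeroes out most coefficients, so the fitted model stays estimable and forecasts out of sample.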
  2. By: Saeed Nosratabadi; Roya Khayer Zahed; Vadim Vitalievich Ponkratov; Evgeniy Vyacheslavovich Kostyrin
    Abstract: Background/Purpose: The use of artificial intelligence (AI) models for data-driven decision-making in different stages of employee lifecycle (EL) management is increasing. However, no comprehensive study addresses the contributions of AI to EL management. The main goal of this study was therefore to address this theoretical gap and determine the contribution of AI models to EL. Methods: This study applied the PRISMA method, a systematic literature review model, to ensure that the maximum number of publications related to the subject could be accessed. The output of the PRISMA model led to the identification of 23 related articles, and the findings of this study are presented based on the analysis of these articles. Results: The findings revealed that AI algorithms were used in all stages of EL management (i.e., recruitment, on-boarding, employability and benefits, retention, and off-boarding). It was also found that Random Forest, Support Vector Machine, Adaptive Boosting, Decision Tree, and Artificial Neural Network algorithms outperform other algorithms and were the most used in the literature. Conclusion: Although the use of AI models in solving EL problems is increasing, research on this topic is still in its infancy, and more research is necessary.
    Date: 2022–09
  3. By: Mr. Yunhui Zhao; Yang Liu; Di Yang
    Abstract: Inflation has been rising during the pandemic against the backdrop of supply chain disruptions and a multi-year boom in global owner-occupied house prices. We present some stylized facts pointing to house prices as a leading indicator of headline inflation in the U.S. and eight other major economies with fast-rising house prices. We then apply machine learning methods to forecast inflation in two housing components (rent and owner-occupied housing cost) of headline inflation and draw tentative inferences about the inflationary impact. Our results suggest that for most of these countries, the housing components could make a relatively large and sustained contribution to headline inflation, as inflation is only just starting to reflect the higher house prices. Methodologically, for the vast majority of the countries we analyze, machine-learning models outperform the VAR model, suggesting some potential value in incorporating such models into inflation forecasting.
    Keywords: Housing Price Inflation; Rent; Owner-Occupied Housing; Machine Learning; Forecast; machine-learning model; machine learning method; housing boom; Inflation; Housing prices; Housing; Consumer price indexes; Global; Europe; Australia and New Zealand; North America; Caribbean; VAR model
    Date: 2022–07–28
  4. By: Roberto Baviera; Pietro Manzoni
    Abstract: A Recurrent Neural Network that operates on several time lags, called an RNN(p), is the natural generalization of an autoregressive ARX(p) model. It is a powerful forecasting tool when different time scales can influence a given phenomenon, as happens in the energy sector, where hourly, daily, weekly and yearly interactions coexist. The cost-effective BPTT algorithm is the industry-standard learning algorithm for RNNs. We prove that, when training RNN(p) models, other learning algorithms turn out to be much more efficient in terms of both time and space complexity. We also introduce a new learning algorithm, Tree Recombined Recurrent Learning, that leverages a tree representation of the unrolled network and appears to be even more effective. We present an application of RNN(p) models to power consumption forecasting on the hourly scale: experimental results demonstrate the efficiency of the proposed algorithm and the excellent predictive accuracy of the selected model in both point and probabilistic forecasting of energy consumption.
    Date: 2022–09
  5. By: Paul Trust; Ahmed Zahran; Rosane Minghim
    Abstract: The need for timely data analysis for economic decisions has prompted most economists and policy makers to search for non-traditional supplementary sources of data. In that context, text data is being explored to enrich traditional data sources because it is easy to collect and highly abundant. Our work focuses on studying the potential of textual data, in particular news pieces, for measuring economic policy uncertainty (EPU). Economic policy uncertainty is defined as the public's inability to predict the outcomes of their decisions under new policies and future economic fundamentals. Quantifying EPU is of great importance to policy makers, economists, and investors since it influences their expectations about future economic fundamentals, with an impact on their policy, investment and saving decisions. Most of the previous work using news articles to measure EPU is either manual or based on simple keyword search. Our work proposes a machine-learning-based solution involving weak supervision to classify news articles with regard to economic policy uncertainty. Weak supervision is shown to be an efficient machine learning paradigm for applying machine learning models in low-resource settings with no or scarce training sets, leveraging domain knowledge and heuristics. We further generated a weak-supervision-based EPU index that we used to conduct extensive econometric analysis along with Irish macroeconomic indicators to validate whether our generated index foreshadows weaker macroeconomic performance.
    Date: 2022–08
  6. By: Ludovic Goudenege; Andrea Molent; Antonino Zanette
    Abstract: Total value adjustment (XVA) is the change in value to be added to the price of a derivative to account for bilateral default risk and funding costs. In this paper, we compute such a premium for American basket derivatives whose payoff depends on multiple underlyings. In particular, in our model the underlyings are assumed to follow the multidimensional Black-Scholes stochastic model. In order to determine the XVA, we follow the approach introduced by Burgard and Kjaer (2010) and afterward applied by Arregui et al. (2017, 2019) for one-dimensional American derivatives. The evaluation of the XVA for basket derivatives is particularly challenging, as the presence of several underlyings leads to a high-dimensional control problem. We tackle this obstacle by resorting to Gaussian Process Regression, a machine learning technique that allows one to address the curse of dimensionality effectively. Moreover, the use of numerical techniques such as control variates turns out to be a powerful tool to improve the accuracy of the proposed methods. The paper includes the results of several numerical experiments that confirm the effectiveness of the proposed methodologies.
    Date: 2022–09
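    [Editor's note] The paper uses Gaussian Process Regression inside a full XVA valuation scheme; as a minimal sketch of just the regression building block, here is a numpy-only GP posterior mean with an RBF kernel fitted to a basket-call-style payoff of two "underlyings". The payoff, kernel parameters, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gp_posterior_mean(X_train, y_train, X_test, length_scale=0.5, noise=1e-4):
    """Gaussian Process regression with an RBF kernel: posterior mean only."""
    def rbf(A, B):
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-0.5 * sq_dists / length_scale ** 2)

    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))  # jittered Gram matrix
    alpha = np.linalg.solve(K, y_train)                        # K^{-1} y
    return rbf(X_test, X_train) @ alpha

# Toy basket-call-style payoff of two "underlyings" on [-1, 1]^2.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.maximum(X.sum(axis=1), 0.0)
pred = gp_posterior_mean(X, y, np.array([[0.3, 0.4], [-0.8, -0.5]]))
```

    The point of the technique is that the kernel works directly on distances between input points, so the same code handles baskets of many underlyings without an explosion in basis functions.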
  7. By: Ziwei Mei; Peter C. B. Phillips; Zhentao Shi
    Abstract: The global financial crisis and Covid recession have renewed discussion concerning trend-cycle discovery in macroeconomic data, and boosting has recently upgraded the popular HP filter to a modern machine learning device suited to data-rich and rapid computational environments. This paper sheds light on its versatility in trend-cycle determination, explaining in a simple manner both HP filter smoothing and the consistency delivered by boosting for general trend detection. Applied to a universe of time series in FRED databases, boosting outperforms other methods in timely capturing downturns at crises and recoveries that follow. With its wide applicability the boosted HP filter is a useful automated machine learning addition to the macroeconometric toolkit.
    Date: 2022–09
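    [Editor's note] The boosting idea behind the boosted HP filter is simple: apply the HP smoother, then re-apply it to the leftover cycle and accumulate the fitted trends. A minimal numpy sketch follows; the smoothing parameter, iteration count, and test series are illustrative, not from the paper.

```python
import numpy as np

def hp_smoother_matrix(T, lam=1600.0):
    """Smoother matrix S = (I + lam * D'D)^{-1} of the Hodrick-Prescott filter."""
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]        # second-difference operator
    return np.linalg.inv(np.eye(T) + lam * (D.T @ D))

def boosted_hp(y, lam=1600.0, iterations=3):
    """Boosting: re-apply the HP smoother to the residual cycle and
    accumulate the fitted trends, i.e. trend_m = (I - (I - S)^m) y."""
    S = hp_smoother_matrix(len(y), lam)
    trend = np.zeros_like(y, dtype=float)
    cycle = np.asarray(y, dtype=float).copy()
    for _ in range(iterations):
        step = S @ cycle
        trend += step
        cycle -= step
    return trend, cycle

# A noisy quadratic trend as a stand-in for a macro series.
t = np.arange(100)
rng = np.random.default_rng(1)
y = 0.01 * t ** 2 + rng.normal(0.0, 1.0, size=100)
trend, cycle = boosted_hp(y)
```

    Each boosting pass extracts trend left behind by the previous pass, which is what lets the boosted filter remain consistent for more general trend processes than the one-shot HP filter.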
  8. By: Ricardo M\"uller; Marco Schreyer; Timur Sattarov; Damian Borth
    Abstract: Detecting accounting anomalies is a recurrent challenge in financial statement audits. Recently, novel methods derived from Deep Learning (DL) have been proposed to audit the large volumes of a statement's underlying accounting records. However, due to their vast number of parameters, such models exhibit the drawback of being inherently opaque. At the same time, the concealing of a model's inner workings often hinders its real-world application. This observation holds particularly true in financial audits, since auditors must reasonably explain and justify their audit decisions. Various Explainable AI (XAI) techniques have been proposed to address this challenge, e.g., SHapley Additive exPlanations (SHAP). However, in unsupervised DL as often applied in financial audits, these methods explain the model output at the level of encoded variables. As a result, the explanations of Autoencoder Neural Networks (AENNs) are often hard for human auditors to comprehend. To mitigate this drawback, we propose RESHAPE, which explains the model output at an aggregated attribute level. In addition, we introduce an evaluation framework to compare the versatility of XAI methods in auditing. Our experimental results provide empirical evidence that RESHAPE yields more versatile explanations than state-of-the-art baselines. We envision such attribute-level explanations as a necessary next step in the adoption of unsupervised DL techniques in financial auditing.
    Date: 2022–09
  9. By: Emanuel Kohlscheen
    Abstract: This paper examines the drivers of CPI inflation through the lens of a simple, but computationally intensive machine learning technique. More specifically, it predicts inflation across 20 advanced countries between 2000 and 2021, relying on 1,000 regression trees that are constructed from six key macroeconomic variables. This agnostic, purely data-driven method delivers (relatively) good prediction performance. Out-of-sample root mean square errors (RMSEs) systematically beat even the in-sample performance of benchmark econometric models. Partial effects of inflation expectations on CPI outcomes are also elicited in the paper. Overall, the results highlight the role of expectations for inflation outcomes in advanced economies, even though their importance appears to have declined somewhat during the last 10 years.
    Date: 2022–08
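    [Editor's note] A minimal sketch of the ensemble-of-regression-trees idea described above, using scikit-learn's random forest as a stand-in for the paper's 1,000 trees. The six predictors and the nonlinear target are synthetic placeholders for the paper's macro variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 6))          # six synthetic "macro" predictors
# Inflation-like target, nonlinear in the first two predictors plus noise.
y = 2.0 * np.tanh(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0.0, 0.3, size=n)

X_tr, X_te, y_tr, y_te = X[:800], X[800:], y[:800], y[800:]
forest = RandomForestRegressor(n_estimators=1000, random_state=0).fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, forest.predict(X_te)) ** 0.5
naive_rmse = mean_squared_error(y_te, np.full(len(y_te), y_tr.mean())) ** 0.5
```

    Tree ensembles pick up the nonlinearities without any functional-form assumptions, which is the "agnostic, purely data-driven" property the abstract emphasizes.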
  10. By: Ms. Burcu Hacibedel; Ritong Qu
    Abstract: In this paper, we study systemic non-financial corporate sector distress using firm-level probabilities of default (PD), covering 55 economies and spanning the last three decades. Systemic corporate distress is identified by elevated PDs across a large portion of the firms in an economy. A machine-learning-based early warning system is constructed to predict the onset of distress in one year's time. Our results show that credit expansion, monetary policy tightening, overvalued stock prices, and debt-linked balance-sheet weaknesses predict corporate distress. We also find that systemic corporate distress events are associated with contractions in GDP and credit growth in advanced and emerging markets, to varying degrees, and that these contractions are milder than those of financial crises.
    Keywords: Nonfinancial sector; Probability of default; Early warning systems; Macroprudential policy; balance-sheet weakness; distress events; PD indices; Corporate sector; Banking crises; Credit; Financial statements; Global
    Date: 2022–07–29
  11. By: Dangxing Chen
    Abstract: The use of neural networks has been very successful in a wide variety of applications. However, it has recently been observed that neural networks generalize poorly under distributional shift. Several efforts have been made to identify potential out-of-distribution inputs. Although the existing literature has made significant progress on images and textual data, finance has been overlooked. The aim of this paper is to investigate distribution shift in the credit scoring problem, one of the most important applications of finance. For the potential distribution shift problem, we propose a novel two-stage model. Using an out-of-distribution detection method, data is first separated into confident and unconfident sets. As a second step, we utilize domain knowledge with a mean-variance optimization in order to provide reliable bounds for unconfident samples. Using empirical results, we demonstrate that our model offers reliable predictions for the vast majority of the data. Only a small portion of the dataset is inherently difficult to judge, and we leave those cases to human judgment. Based on the two-stage model, highly confident predictions can be made, and the potential risks associated with the model are significantly reduced.
    Date: 2022–09
  12. By: Nghia Chu; Binh Dao; Nga Pham; Huy Nguyen; Hien Tran
    Abstract: Predicting fund performance is beneficial to both investors and fund managers, and yet is a challenging task. In this paper, we test whether deep learning models can predict fund performance more accurately than traditional statistical techniques. Fund performance is typically evaluated by the Sharpe ratio, which represents risk-adjusted performance and ensures meaningful comparability across funds. We calculate annualised Sharpe ratios from the monthly-returns time series of more than 600 open-end mutual funds investing in listed large-cap equities in the United States. We find that long short-term memory (LSTM) and gated recurrent unit (GRU) deep learning methods, both trained with modern Bayesian optimization, forecast funds' Sharpe ratios more accurately than traditional statistical methods. An ensemble method, which combines the LSTM and GRU forecasts, achieves the best performance of all models. There is evidence that deep learning and ensembling offer promising solutions to the challenge of fund performance forecasting.
    Date: 2022–09
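    [Editor's note] The target variable in this abstract, the annualised Sharpe ratio computed from monthly returns, is a one-liner worth making explicit: scale the monthly mean/volatility ratio by sqrt(12). A minimal sketch with made-up returns:

```python
import numpy as np

def annualized_sharpe(monthly_returns, monthly_risk_free=0.0):
    """Annualized Sharpe ratio from monthly returns: sqrt(12) scales the
    monthly mean/volatility ratio up to an annual horizon."""
    excess = np.asarray(monthly_returns, dtype=float) - monthly_risk_free
    return np.sqrt(12.0) * excess.mean() / excess.std(ddof=1)

# One year of made-up monthly returns: six months at +1%, six at -0.5%.
r = np.array([0.01] * 6 + [-0.005] * 6)
sr = annualized_sharpe(r)
```

    A model forecasting this quantity must implicitly forecast both the level and the volatility of future returns, which is part of what makes the task hard.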
  13. By: David Karpa; Torben Klarl; Michael Rochlitz
    Abstract: The most important resource to improve technologies in the field of artificial intelligence is data. Two types of policies are crucial in this respect: privacy and data-sharing regulations, and the use of surveillance technologies for policing. Both types of policies vary substantially across countries and political regimes. In this paper, we examine how authoritarian and democratic political institutions can influence the quality of research in artificial intelligence, and the availability of large-scale datasets to improve and train deep learning algorithms. We focus mainly on the Chinese case, and find that - ceteris paribus - authoritarian political institutions continue to have a negative effect on innovation. They can, however, have a positive effect on research in deep learning, via the availability of large-scale datasets that have been obtained through government surveillance. We propose a research agenda to study which of the two effects might dominate in a race for leadership in artificial intelligence between countries with different political institutions, such as the United States and China.
    Keywords: Artificial intelligence, political institutions, big data, surveillance, innovation, China
    JEL: O25 O31 O38 P16 P51
    Date: 2021–11
  14. By: D Barrera (UNIANDES); S Crépey (LPSM, UPCité); E Gobet (CMAP, X); Hoang-Dung Nguyen (LPSM, UPCité); B Saadeddine (UPS)
    Abstract: We propose a non-asymptotic convergence analysis of a two-step approach to learn a conditional value-at-risk (VaR) and expected shortfall (ES) in a nonparametric setting using Rademacher and Vapnik-Chervonenkis bounds. Our approach for the VaR is extended to the problem of learning at once multiple VaRs corresponding to different quantile levels. This results in efficient learning schemes based on neural network quantile and least-squares regressions. An a posteriori Monte Carlo (non-nested) procedure is introduced to estimate distances to the ground-truth VaR and ES without access to the latter. This is illustrated using numerical experiments in a Gaussian toy-model and a financial case-study where the objective is to learn a dynamic initial margin.
    Date: 2022–09
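    [Editor's note] The two-step scheme in this abstract rests on a classical fact: the minimizer of the pinball (quantile) loss is the quantile, i.e. the VaR, and the ES is then the mean loss beyond it. The sketch below checks this empirically with a grid search standing in for the paper's neural-network quantile regression; the Gaussian loss sample is a toy assumption.

```python
import numpy as np

def pinball_loss(sample, q, alpha):
    """Quantile ("pinball") loss: its minimizer over q is the alpha-quantile."""
    d = sample - q
    return np.mean(np.maximum(alpha * d, (alpha - 1.0) * d))

rng = np.random.default_rng(4)
losses = rng.standard_normal(100_000)   # stand-in portfolio loss sample
alpha = 0.975

# Step 1: learn VaR by minimizing pinball loss (grid search stands in for
# the paper's neural-network quantile regression).
grid = np.linspace(1.0, 3.0, 801)
var_hat = grid[np.argmin([pinball_loss(losses, q, alpha) for q in grid])]

# Step 2: learn ES as the average loss beyond the estimated VaR.
es_hat = losses[losses >= var_hat].mean()
```

    For a standard normal, the 97.5% VaR is about 1.96 and the ES about 2.34, which the sample estimates recover; in the conditional setting the constant q is replaced by a regression function of the risk factors.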
  15. By: Yukang Jiang; Xueqin Wang; Zhixi Xiong; Haisheng Yang; Ting Tian
    Abstract: The paper proposes a time-varying parameter global vector autoregressive (TVP-GVAR) framework for predicting and analysing economic variables in developed regions. We aim to provide an easily accessible approach for applied economic settings, in which a variety of machine learning models can be incorporated for out-of-sample prediction. A LASSO-type technique is selected for numerically efficient model selection based on mean squared errors (MSEs). We show the convincing in-sample performance of our proposed model for all economic variables and relatively high-precision out-of-sample predictions with different-frequency economic inputs. Furthermore, the time-varying orthogonal impulse responses provide novel insights into the connectedness of economic variables at critical time points across developed regions. We also derive the corresponding asymptotic bands (confidence intervals) for the orthogonal impulse response functions under standard assumptions.
    Date: 2022–07
  16. By: Susan Athey; Dean Karlan; Emil Palikot; Yuan Yuan
    Abstract: Online platforms often face challenges being both fair (i.e., non-discriminatory) and efficient (i.e., maximizing revenue). Using computer vision algorithms and observational data from a micro-lending marketplace, we find that choices made by borrowers creating online profiles impact both of these objectives. We further support this conclusion with a web-based randomized survey experiment. In the experiment, we create profile images using Generative Adversarial Networks that differ in a specific feature and estimate its impact on lender demand. We then counterfactually evaluate alternative platform policies and identify particular approaches to influencing the changeable profile photo features that can ameliorate the fairness-efficiency tension.
    Date: 2022–09
  17. By: Hugo Inzirillo; Ludovic De Villelongue
    Abstract: Deep learning is playing an increasingly important role in time series analysis. We focus on time series forecasting using the attention-free mechanism, a more efficient framework, and propose a new architecture for time series in which linear models appear unable to capture the time dependence. The architecture is built from attention-free LSTM layers and outperforms linear models for conditional variance prediction. Our findings confirm the validity of the model, which improves the prediction capacity of an LSTM while also improving the efficiency of the learning task.
    Date: 2022–09
  18. By: Joseph Jerome; Leandro Sanchez-Betancourt; Rahul Savani; Martin Herdegen
    Abstract: Within the mathematical finance literature there is a rich catalogue of mathematical models for studying algorithmic trading problems -- such as market-making and optimal execution -- in limit order books. This paper introduces mbt_gym, a Python module that provides a suite of gym environments for training reinforcement learning (RL) agents to solve such model-based trading problems. The module is set up in an extensible way to allow the combination of different aspects of different models. It supports highly efficient implementations of vectorized environments to allow faster training of RL agents. In this paper, we motivate the challenge of using RL to solve such model-based limit order book problems in mathematical finance, we explain the design of our gym environment, and then demonstrate its use in solving standard and non-standard problems from the literature. Finally, we lay out a roadmap for further development of our module, which we provide as an open source repository on GitHub so that it can serve as a focal point for RL research in model-based algorithmic trading.
    Date: 2022–09
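    [Editor's note] The gym interface the abstract refers to boils down to a reset/step contract between environment and agent. The toy market-making environment below illustrates only that pattern; its class name, fill model, and parameters are invented for illustration and are not the module's actual API.

```python
import numpy as np

class ToyMarketMakingEnv:
    """Minimal gym-style environment (reset/step) for market making.
    The dynamics are illustrative, not the paper's models: fills arrive
    with probability exp(-decay * half_spread) on each side."""

    def __init__(self, horizon=100, fill_decay=1.5, vol=0.01, seed=0):
        self.horizon, self.fill_decay, self.vol = horizon, fill_decay, vol
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t, self.inventory, self.cash, self.mid = 0, 0, 0.0, 100.0
        return np.array([self.inventory, self.mid])

    def step(self, half_spread):
        fill_prob = np.exp(-self.fill_decay * half_spread)
        if self.rng.random() < fill_prob:        # bid filled: buy one unit
            self.inventory += 1
            self.cash -= self.mid - half_spread
        if self.rng.random() < fill_prob:        # ask filled: sell one unit
            self.inventory -= 1
            self.cash += self.mid + half_spread
        self.mid += self.vol * self.rng.standard_normal()   # mid-price walk
        self.t += 1
        done = self.t >= self.horizon
        reward = self.cash + self.inventory * self.mid      # mark-to-market
        return np.array([self.inventory, self.mid]), reward, done, {}

env = ToyMarketMakingEnv()
obs, done = env.reset(), False
while not done:                        # constant half-spread "agent"
    obs, reward, done, info = env.step(half_spread=0.05)
```

    An RL agent would replace the constant half-spread with a learned policy mapping the observation to an action; the value of the shared interface is that the same training loop works for any model plugged in behind step().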
  19. By: Hannes Mueller; Christopher Rauh; Alessandro Ruggieri
    Abstract: This document presents the outcome of two modules developed for the UK Foreign, Commonwealth & Development Office (FCDO): 1) a forecasting model that uses machine learning and text downloads to predict outbreaks and the intensity of internal armed conflict; and 2) a decision-making module that embeds these forecasts into a model of preventing armed conflict damages. The outcome is a quantitative benchmark that should provide a testing ground for internal FCDO debates at both the strategic level (i.e., the process of deciding on country priorities) and the operational level (i.e., identifying critical periods by country experts). Our method allows the FCDO to simulate policy interventions and changes in its strategic focus. We show, for example, that the FCDO should remain engaged in recently stabilized armed conflicts and rethink its development focus in the highest-risk countries. The total expected economic benefit of reinforced preventive efforts, as defined in this report, would bring monthly savings in expected costs of 26 billion USD, with a monthly gain to the UK of 630 million USD.
    JEL: C44 D74 E17
    Date: 2022–06
  20. By: Zegners, Dainis (Erasmus University Rotterdam); Sunde, Uwe (LMU Munich); Strittmatter, Anthony (CREST-ENSAE)
    Abstract: This paper presents a novel approach to analyze human decision-making that involves comparing the behavior of professional chess players relative to a computational benchmark of cognitively bounded rationality. This benchmark is constructed using algorithms of modern chess engines and allows investigating behavior at the level of individual move-by-move observations, thus representing a natural benchmark for computationally bounded optimization. The analysis delivers novel insights by isolating deviations from this benchmark of bounded rationality as well as their causes and consequences for performance. The findings document the existence of several distinct dimensions of behavioral deviations, which are related to asymmetric positional evaluation in terms of losses and gains, time pressure, fatigue, and complexity. The results also document that deviations from the benchmark do not necessarily entail worse performance. Faster decisions are associated with more frequent deviations from the benchmark, yet they are also associated with better performance. The findings are consistent with an important influence of intuition and experience, thereby shedding new light on the recent debate about computational rationality in cognitive processes.
    Keywords: cognitively bounded rationality; benchmark computing; artificial intelligence; decision quality; decision time
    JEL: D01
    Date: 2020–12–22
  21. By: Nerijus Cerniauskas (Vilniaus Universitetas); Denisa Sologon (Luxembourg Institute of Socio-Economic Research (LISER, CEPS/INSTEAD)); Cathal O'Donoghue (National University of Ireland); Linas Tarasonis (Vilniaus Universitetas; Lietuvos Bankas)
    Abstract: We model the household disposable income distribution in Lithuania and explore the drivers of the increase in income inequality between 2007 and 2015. We quantify the contributions of four factors to changes in the disposable income distribution: (i) demographics; (ii) labor market structure; (iii) returns and prices; and (iv) tax–benefit system. Results show that the effects of the factors were substantial and reflected heterogeneous developments over two subperiods: changes in the tax and benefit system cushioned a rapid rise in market income inequality because of the global financial crisis during 2007–2011, but failed to do so during the subsequent years of economic expansion, when rising returns in the labor and capital markets significantly increased disposable income inequality. We also find that declining marriage rates contributed to the increase in income inequality in Lithuania.
    Keywords: income inequality, redistribution, decompositions, microsimulation, tax-benefit policies.
    JEL: D31 H23 J21
    Date: 2021
  22. By: Ovielt Baltodano López; Roberto Casarin
    Abstract: We propose a flexible stochastic block model for multi-layer networks, where layer-specific hidden Markov-chain processes drive the changes in the formation of communities. The changes in block membership of a node in a given layer may be influenced by its own past membership in other layers. This allows for clustering overlap, clustering decoupling, or more complex relationships between layers, including settings of unidirectional or bidirectional block causality. We cope with the overparameterization issue of a saturated specification by assuming a Multi-Laplacian prior distribution within a Bayesian framework. Data augmentation and Gibbs sampling are used to make the inference problem more tractable. Through simulations, we show that standard linear models are not able to detect block causality in the great majority of scenarios. As an application to trade networks, we show that our model provides a unified framework encompassing community detection and the gravity equation. The model is used to study the causality between trade agreements and trade by looking at the global topological properties of the networks, as opposed to the main existing approaches, which focus on local bilateral relationships. We provide new evidence of unidirectional causality from the free trade agreements network to the non-observable trade barriers network structure for 159 countries in the period 1995-2017.
    Date: 2022–09
  23. By: Zonghui Yao (Northeastern University); Dunia López-Pintado (Universidad Pablo de Olavide); Sara López-Pintado (Northeastern University)
    Abstract: The spread of a disease (idea or product) in a population is often hard to predict. In reality, we tend to observe only a few specific realizations of the contagion process (e.g., the recent COVID-19 pandemic), so limited information can be obtained for predicting future similar events. In this work, we use large-scale simulations to study, under different exogenous network properties, the complete time course of the contagion process, focusing on its unpredictability (or uncertainty). We exploit the functional nature of the data, i.e., the number of infected agents as a function of time, and propose a novel non-parametric measure of variance for functional data based on a weighted version of the depth-based central region area. This methodology is applied to the susceptible-infected-susceptible epidemiological model and small-world networks. We find that the degree of uncertainty of a contagion process is a non-monotonic (increasing/decreasing) function of the contagion rate (the ratio between infection and recovery probabilities). In particular, maximum uncertainty is attained at the “stable contagion threshold”, which represents the parameter conditions for which the endemic/steady state reaches a plateau as a function of the contagion rate. The effects of the density of the network and the contagion rate are significant and quite similar, whereas the structure of the network, i.e., its amount of clustering/randomness, has a mild effect on the contagion process.
    Keywords: contagion; uncertainty; functional data.
    JEL: C02 D80
    Date: 2022
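    [Editor's note] A minimal sketch of the kind of SIS simulation study described above: repeated runs on a ring lattice (a small-world graph before rewiring), with the spread of endemic levels across runs as a crude stand-in for the paper's depth-based central-region-area measure. All parameters are illustrative.

```python
import numpy as np

def ring_lattice(n, k):
    """Adjacency matrix of a ring where each node links to its k nearest
    neighbours on each side (small-world graph before rewiring)."""
    A = np.zeros((n, n), dtype=int)
    for offset in range(1, k + 1):
        idx = np.arange(n)
        A[idx, (idx + offset) % n] = 1
        A[(idx + offset) % n, idx] = 1
    return A

def sis_path(A, beta, gamma, steps, rng, n_seeds=5):
    """One susceptible-infected-susceptible run; returns infected fraction over time."""
    n = A.shape[0]
    infected = np.zeros(n, dtype=bool)
    infected[rng.choice(n, n_seeds, replace=False)] = True
    path = []
    for _ in range(steps):
        pressure = A @ infected.astype(int)       # infected neighbours per node
        p_inf = 1.0 - (1.0 - beta) ** pressure    # infection from any neighbour
        new_infections = ~infected & (rng.random(n) < p_inf)
        recoveries = infected & (rng.random(n) < gamma)
        infected = (infected | new_infections) & ~recoveries
        path.append(infected.mean())
    return np.array(path)

rng = np.random.default_rng(5)
A = ring_lattice(200, 3)
runs = np.stack([sis_path(A, beta=0.2, gamma=0.3, steps=60, rng=rng)
                 for _ in range(50)])
# Crude uncertainty measure: spread of the endemic level across runs.
endemic_spread = runs[:, -1].max() - runs[:, -1].min()
```

    Each run is one functional datum (infected fraction over time); the paper's contribution is a principled variance measure over such curves, whereas the range used here ignores their functional shape.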
  24. By: Antonio Cutanda (Universidad de Valencia, Valencia, Spain. ORCID number: 0000-0003-2066-4632); Juan A. Sanchis (Universidad de Valencia and ERICES, Valencia, Spain. ORCID number: 0000-0001-9664-4668)
    Abstract: This paper simulates the response of Spanish labour supply to income tax changes using estimates of the intertemporal elasticity of substitution of leisure. These elasticities are calculated using a pseudo-panel built by combining information from the EPA and the ECPF from 1987 to 1997. Our findings suggest that income tax changes can have an impact on Spanish labour supply, though the effects would be minor. We also uncover that this labour response differs between men and women, as well as between permanent and fixed-term contract workers, and that the responses differ with the age of the worker.
    Keywords: Labour Supply; Labour Income Tax; Intertemporal Elasticity of Substitution of Leisure; Simulations
    JEL: E62 H24 H31 J22
    Date: 2022–09

General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.