nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒08‒16
34 papers chosen by
Stan Miles
Thompson Rivers University

  1. Economic Recession Prediction Using Deep Neural Network By Zihao Wang; Kun Li; Steve Q. Xia; Hongfu Liu
  2. Deep equal risk pricing of financial derivatives with non-translation invariant risk measures By Alexandre Carbonneau; Frédéric Godin
  3. Machine Learning and Factor-Based Portfolio Optimization By Thomas Conlon; John Cotter; Iason Kynigakis
  4. Machine Learning Classification Methods and Portfolio Allocation: An Examination of Market Efficiency By Yang Bai; Kuntara Pukthuanthong
  5. Credit scoring using neural networks and SURE posterior probability calibration By Matthieu Garcin; Samuel Stéphan
  6. Factor Representation and Decision Making in Stock Markets Using Deep Reinforcement Learning By Zhaolu Dong; Shan Huang; Simiao Ma; Yining Qian
  7. Realised Volatility Forecasting: Machine Learning via Financial Word Embedding By Eghbal Rahimikia; Stefan Zohren; Ser-Huang Poon
  8. Neural network approximation for superhedging prices By Francesca Biagini; Lukas Gonon; Thomas Reitsam
  9. A Hybrid Learning Approach to Detecting Regime Switches in Financial Markets By Peter Akioyamen; Yi Zhou Tang; Hussien Hussien
  10. The market notices published by the Italian Stock Exchange: a machine learning approach for the selection of the relevant ones By Marta Bernardini; Paolo Massaro; Francesca Pepe; Francesco Tocco
  11. Application of classification algorithms for the assessment of confirmation to quality remarks By Fabio Zambuto; Simona Arcuti; Roberto Sabatini; Daniele Zambuto
  12. Mission-Oriented Policies and the “Entrepreneurial State” at Work: An Agent-Based Exploration By Giovanni Dosi; Francesco Lamperti; Mariana Mazzucato; Mauro Napoletano; Andrea Roventini
  13. Why East Asian students perform better in mathematics than their peers: An investigation using a machine learning approach By Hanol Lee; Jong-Wha Lee
  14. Hedging with linear regressions and neural networks By Ruf, Johannes; Wang, Weiguan
  15. Relational Graph Neural Networks for Fraud Detection in a Super-App Environment By Jaime D. Acevedo-Viloria; Luisa Roa; Soji Adeshina; Cesar Charalla Olazo; Andrés Rodríguez-Rey; Jose Alberto Ramos; Alejandro Correa-Bahnsen
  16. Markov based mesoscopic simulation tool for urban freight: SIMTURB By Mathieu Gardrat; Pascal Pluvinet
  17. Temporal-Relational Hypergraph Tri-Attention Networks for Stock Trend Prediction By Chaoran Cui; Xiaojie Li; Juan Du; Chunyun Zhang; Xiushan Nie; Meng Wang; Yilong Yin
  18. Graph-Based Learning for Stock Movement Prediction with Textual and Relational Data By Qinkai Chen; Christian-Yann Robert
  19. LocalGLMnet: interpretable deep learning for tabular data By Ronald Richman; Mario V. Wüthrich
  20. Calibrating the Nelson-Siegel-Svensson Model by Genetic Algorithm By Asif Lakhany; Andrej Pintar; Amber Zhang
  21. Nighttime Light Intensity and Child Health Outcomes in Bangladesh By Mohammad Rafiqul Islam; Masud Alam; Munshi Naser İbne Afzal
  22. A new universal child allowance in Italy: equity and efficiency concerns By Nicola Curci; Marco Savegnago
  23. A New Multi Objective Mathematical Model for Relief Distribution Location at Natural Disaster Response Phase By Mohamad Ebrahim Sadeghi; Morteza Khodabakhsh; Mahmood Reza Ganjipoor; Hamed Kazemipoor; Hamed Nozari
  24. On simulation of rough Volterra stochastic volatility models By Jan Matas; Jan Pospíšil
  25. Learning who is in the market from time series: market participant discovery through adversarial calibration of multi-agent simulators By Victor Storchan; Svitlana Vyetrenko; Tucker Balch
  26. What Individual Data Tells us about the Covid-19 Impact on Corporate Liquidity in 2020 By Bureau, Benjamin; Duquerroy, Anne; Giorgi, Julien; Lé, Mathias; Scott, Suzanne; Vinas, Frédéric
  27. Corporate activity in France amid the Covid-19 crisis. A granular data analysis. By Bureau, Benjamin; Duquerroy, Anne; Giorgi, Julien; Lé, Mathias; Scott, Suzanne; Vinas, Frédéric
  28. Text Semantics Capture Political and Economic Narratives By Elliott Ash; Germain Gauthier; Philine Widmer
  29. Simultaneous optimization of transformer tap changer and network capacitors to improve the distribution system’s static security considering distributed generation sources By Mortazi, Mohammad; Moradi, Ahmad; Khosravi, Mohsen
  30. Analysis of the labour market using Google Trends data By Hugo Couture; Dalibor Stevanovic
  31. The financial market impact of ECB monetary policy press conferences - a text based approach By Parle, Conor
  32. Choose the school, choose the performance. New evidence on the determinants of student performance in eight European countries By Bonacini, Luca; Brunetti, Irene; Gallo, Giovanni
  33. Welfare resilience at the onset of the COVID-19 pandemic in a selection of European countries: impact on public finance and household incomes By Cantó, Olga; Figari, Francesco; Fiorio, Carlo V.; Kuypers, Sarah; Marchal, Sarah; Romaguera-de-la-Cruz, Marina; Tasseva, Iva V.; Verbist, Gerlinde
  34. Nested Pseudo Likelihood Estimation of Continuous-Time Dynamic Discrete Games By Jason R. Blevins; Minhae Kim

  1. By: Zihao Wang; Kun Li; Steve Q. Xia; Hongfu Liu
    Abstract: We investigate the effectiveness of different machine learning methodologies in predicting economic cycles. We identify the deep learning methodology of a Bi-LSTM with Autoencoder as the most accurate model for forecasting the beginning and end of economic recessions in the U.S. We adopt commonly available macro and market-condition features to compare the ability of different machine learning models to generate good predictions both in-sample and out-of-sample. The proposed model is flexible and dynamic, allowing both predictive variables and model coefficients to vary over time. It provided good out-of-sample predictions for the past two recessions and early warning of the COVID-19 recession.
    Date: 2021–07
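The two-stage design described above pairs an autoencoder (feature compression) with a Bi-LSTM classifier. A minimal sketch of the compression stage alone — plain NumPy, a synthetic feature panel, and none of the authors' actual architecture or data — looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy macro/market feature panel: 200 months x 10 indicators.
X = rng.standard_normal((200, 10))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]   # induce some correlation

# One-hidden-layer autoencoder trained by plain gradient descent.
n_in, n_code = X.shape[1], 3
W1 = 0.1 * rng.standard_normal((n_in, n_code))
W2 = 0.1 * rng.standard_normal((n_code, n_in))
lr = 0.01
errors = []
for _ in range(500):
    H = np.tanh(X @ W1)          # encoder
    X_hat = H @ W2               # decoder
    R = X_hat - X                # reconstruction residual
    errors.append(float(np.mean(R ** 2)))
    # Backpropagation through the two weight matrices.
    G2 = H.T @ R / len(X)
    G1 = X.T @ ((R @ W2.T) * (1 - H ** 2)) / len(X)
    W2 -= lr * G2
    W1 -= lr * G1

codes = np.tanh(X @ W1)          # compressed factors for a downstream classifier
```

The resulting `codes` matrix would then feed a recurrent classifier; the Bi-LSTM stage itself is omitted here.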
  2. By: Alexandre Carbonneau; Frédéric Godin
    Abstract: The use of non-translation invariant risk measures within the equal risk pricing (ERP) methodology for the valuation of financial derivatives is investigated. The ability to move beyond the class of convex risk measures considered in several prior studies provides more flexibility within the pricing scheme. In particular, suitable choices for the risk measure embedded in the ERP framework such as the semi-mean-square-error (SMSE) are shown herein to alleviate the price inflation phenomenon observed under Tail Value-at-Risk based ERP as documented for instance in Carbonneau and Godin (2021b). The numerical implementation of non-translation invariant ERP is performed through deep reinforcement learning, where a slight modification is applied to the conventional deep hedging training algorithm (see Buehler et al., 2019) so as to enable obtaining a price through a single training run for the two neural networks associated with the respective long and short hedging strategies. The accuracy of the neural network training procedure is shown in simulation experiments not to be materially impacted by such modification of the training algorithm.
    Date: 2021–07
  3. By: Thomas Conlon; John Cotter; Iason Kynigakis
    Abstract: We examine machine learning and factor-based portfolio optimization. We find that factors based on autoencoder neural networks exhibit a weaker relationship with commonly used characteristic-sorted portfolios than popular dimensionality reduction techniques. Machine learning methods also lead to covariance and portfolio weight structures that diverge from simpler estimators. Minimum-variance portfolios using latent factors derived from autoencoders and sparse methods outperform simpler benchmarks in terms of risk minimization. These effects are amplified for investors with an increased sensitivity to risk-adjusted returns, during high volatility periods or when accounting for tail risk.
    Date: 2021–07
  4. By: Yang Bai; Kuntara Pukthuanthong
    Abstract: We design a novel framework to examine market efficiency through out-of-sample (OOS) predictability. We frame the asset pricing problem as a machine learning classification problem and construct classification models to predict return states. The prediction-based portfolios beat the market with significant OOS economic gains. We measure prediction accuracies directly. For each model, we introduce a novel application of the binomial test to assess the accuracy of 3.34 million return state predictions. The tests show that our models can extract useful content from historical information to predict future return states. We provide unique economic insights about OOS predictability and machine learning models.
    Date: 2021–08
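The binomial test of prediction accuracy mentioned above is easy to reproduce on made-up numbers (the 5,430 hits out of 10,000 predictions below are hypothetical, not figures from the paper): under the null of no predictive content, each two-state prediction is correct with probability 0.5.

```python
from scipy.stats import binomtest

# Hypothetical tally: 5,430 correct out of 10,000 return-state
# predictions; test H0: hit rate = 0.5 against H1: hit rate > 0.5.
result = binomtest(k=5430, n=10000, p=0.5, alternative="greater")
print(result.statistic, result.pvalue)  # 0.543 and a p-value near zero
```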
  5. By: Matthieu Garcin (ESILV - Ecole Supérieure d'Ingénieurs Léonard de Vinci); Samuel Stéphan (ESILV - Ecole Supérieure d'Ingénieurs Léonard de Vinci, SAMM - Statistique, Analyse et Modélisation Multidisciplinaire (SAmos-Marin Mersenne) - UP1 - Université Paris 1 Panthéon-Sorbonne)
    Abstract: In this article we compare the performance of a logistic regression and a feed-forward neural network for credit scoring purposes. Our results show that the logistic regression gives quite good results on the dataset and that the neural network can improve performance slightly. We also consider different sets of features in order to assess their importance in terms of prediction accuracy. We find that temporal features (i.e. repeated measures over time) can be an important source of information, resulting in an increase in overall model accuracy. Finally, we introduce a new technique for the calibration of predicted probabilities based on Stein's unbiased risk estimate (SURE). This calibration technique can be applied to very general calibration functions. In particular, we detail this method for the sigmoid function as well as for the Kumaraswamy function, which includes the identity as a particular case. We show that stacking the SURE calibration technique with the classical Platt method can improve the calibration of predicted probabilities.
    Keywords: Deep learning,credit scoring,calibration,SURE
    Date: 2021–07–15
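As context for the calibration step, classical Platt scaling — the method the authors stack with their SURE technique — fits a two-parameter sigmoid to raw classifier scores by maximum likelihood. A sketch on synthetic scores (the SURE and Kumaraswamy variants of the paper are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic raw scores s and default indicators y drawn so that the
# true link is sigmoid(2s - 0.5).
s = rng.standard_normal(1000)
y = (rng.random(1000) < 1 / (1 + np.exp(-(2 * s - 0.5)))).astype(float)

def nll(theta):
    """Negative log-likelihood of the Platt sigmoid on a*s + b."""
    a, b = theta
    p = np.clip(1 / (1 + np.exp(-(a * s + b))), 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(nll, x0=[1.0, 0.0])
a, b = res.x
calibrated = 1 / (1 + np.exp(-(a * s + b)))  # calibrated probabilities
```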
  6. By: Zhaolu Dong; Shan Huang; Simiao Ma; Yining Qian
    Abstract: Deep reinforcement learning is a branch of machine learning in which an agent learns to act on the state of its environment in order to maximize its total reward. Deep reinforcement learning provides a good opportunity to model the complexity of portfolio choice in high-dimensional, data-driven environments by leveraging the powerful representations of deep neural networks. In this paper, we build a portfolio management system that uses direct deep reinforcement learning to make optimal portfolio choices periodically among the S&P 500 underlying stocks by learning a good factor representation (as input). The results show that effective learning of market conditions and optimal portfolio allocations can significantly outperform the average market.
    Date: 2021–08
  7. By: Eghbal Rahimikia; Stefan Zohren; Ser-Huang Poon
    Abstract: We develop FinText, a novel, state-of-the-art, financial word embedding from Dow Jones Newswires Text News Feed Database. Incorporating this word embedding in a machine learning model produces a substantial increase in volatility forecasting performance on days with volatility jumps for 23 NASDAQ stocks from 27 July 2007 to 18 November 2016. A simple ensemble model, combining our word embedding and another machine learning model that uses limit order book data, provides the best forecasting performance for both normal and jump volatility days. Finally, we use Integrated Gradients and SHAP (SHapley Additive exPlanations) to make the results more 'explainable' and the model comparisons more transparent.
    Date: 2021–08
  8. By: Francesca Biagini; Lukas Gonon; Thomas Reitsam
    Abstract: This article examines neural network-based approximations for the superhedging price process of a contingent claim in a discrete time market model. First we prove that the $\alpha$-quantile hedging price converges to the superhedging price at time $0$ for $\alpha$ tending to $1$, and show that the $\alpha$-quantile hedging price can be approximated by a neural network-based price. This provides a neural network-based approximation for the superhedging price at time $0$ and also the superhedging strategy up to maturity. To obtain the superhedging price process for $t>0$, by using the Doob decomposition it is sufficient to determine the process of consumption. We show that it can be approximated by the essential supremum over a set of neural networks. Finally, we present numerical results.
    Date: 2021–07
  9. By: Peter Akioyamen (Western University); Yi Zhou Tang (Western University); Hussien Hussien (Western University)
    Abstract: Financial markets are of much interest to researchers due to their dynamic and stochastic nature. With their relations to world populations, global economies and asset valuations, understanding, identifying and forecasting trends and regimes are highly important. Attempts have been made to forecast market trends by employing machine learning methodologies, while statistical techniques have been the primary methods used in developing market regime switching models used for trading and hedging. In this paper we present a novel framework for the detection of regime switches within the US financial markets. Principal component analysis is applied for dimensionality reduction and the k-means algorithm is used as a clustering technique. Using a combination of cluster analysis and classification, we identify regimes in financial markets based on publicly available economic data. We display the efficacy of the framework by constructing and assessing the performance of two trading strategies based on detected regimes.
    Date: 2021–08
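The pipeline described above — PCA for dimensionality reduction followed by k-means clustering — can be sketched with scikit-learn on synthetic data (two artificial regimes with shifted means, not the authors' economic dataset):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Synthetic panel of standardized indicators: 150 "calm" and 150
# "stressed" observations over 8 features, stress shifting every mean.
calm = rng.standard_normal((150, 8))
stress = rng.standard_normal((150, 8)) + 2.0
X = np.vstack([calm, stress])

components = PCA(n_components=3).fit_transform(X)  # dimensionality reduction
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(components)
```

On data this well separated, the two clusters recover the two regimes almost exactly; real regime detection then maps cluster labels back to calendar dates.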
  10. By: Marta Bernardini (Bank of Italy); Paolo Massaro (Bank of Italy); Francesca Pepe (Bank of Italy); Francesco Tocco (Bank of Italy)
    Abstract: Bank of Italy data managers check the market notices published daily by the Italian Stock Exchange (Borsa Italiana) and select those of interest to update the Bank of Italy's Securities Database. This activity is time-consuming and prone to errors should a data manager overlook a relevant notice. In this paper we describe the implementation of a supervised model to automatically select the market notices. The model outperforms the manual approach used by data managers and can therefore be implemented in the regular process to update the Securities Database.
    Keywords: machine learning, Securities Database, automatic selection, Italian Stock Exchange
    JEL: C18 C81 G23
    Date: 2021–07
  11. By: Fabio Zambuto (Bank of Italy); Simona Arcuti (Bank of Italy); Roberto Sabatini (Bank of Italy); Daniele Zambuto
    Abstract: In the context of the data quality management of supervisory banking data, the Bank of Italy receives a significant number of data reports at various intervals from Italian banks. If any anomalies are found, a quality remark is sent back, questioning the data submitted. This process can lead to the bank in question confirming or revising the data it previously transmitted. We propose an innovative methodology, based on text mining and machine learning techniques, for the automatic processing of the data confirmations received from banks. A classification model is employed to predict whether these confirmations should be accepted or rejected based on the reasons provided by the reporting banks, the characteristics of the validation quality checks, and reporting behaviour across the banking system. The model was trained on past cases already labelled by data managers and its performance was assessed against a set of cross-checked cases that were used as gold standard. The empirical findings show that the methodology predicts the correct decisions on recurrent data confirmations and that the performance of the proposed model is comparable to that of data managers currently engaged in data analysis.
    Keywords: supervisory banking data, data quality management, machine learning, text mining, latent Dirichlet allocation, gradient boosting.
    JEL: C18 C81 G21
    Date: 2021–07
  12. By: Giovanni Dosi (LEM - Laboratory of Economics and Management - SSSUP - Scuola Universitaria Superiore Sant'Anna [Pisa]); Francesco Lamperti (UP1 - Université Paris 1 Panthéon-Sorbonne); Mariana Mazzucato; Mauro Napoletano (OFCE - Observatoire français des conjonctures économiques - Sciences Po - Sciences Po); Andrea Roventini
    Abstract: We study the impact of alternative innovation policies on the short- and long-run performance of the economy, as well as on public finances, extending the Schumpeter meeting Keynes agent-based model (Dosi et al., 2010). In particular, we consider market-based innovation policies such as R&D subsidies to firms, tax discount on investment, and direct policies akin to the "Entrepreneurial State" (Mazzucato, 2013), involving the creation of public research oriented firms diffusing technologies along specific trajectories, and funding a Public Research Lab conducting basic research to achieve radical innovations that enlarge the technological opportunities of the economy. Simulation results show that all policies improve productivity and GDP growth, but the best outcomes are achieved by active discretionary State policies, which are also able to crowd-in private investment and have positive hysteresis effects on growth dynamics. For the same size of public resources allocated to market-based interventions, "Mission" innovation policies deliver significantly better aggregate performance if the government is patient enough and willing to bear the intrinsic risks related to innovative activities.
    Keywords: Innovation policy,mission-oriented R&D,entrepreneurial state,agent-based modelling
    Date: 2021–01–01
  13. By: Hanol Lee; Jong-Wha Lee
    Abstract: Using a machine learning approach, we attempt to identify the school-, student-, and country-related factors that predict East Asian students’ higher PISA mathematics scores compared to their international peers. We identify student- and school-related factors, such as metacognition–assess credibility, mathematics learning time, early childhood education and care, grade repetition, school type and size, class size, and student behavior hindering learning, as important predictors of the higher average mathematics scores of East Asian students. Moreover, country-level factors, such as the proportion of youth not in education, training, or employment and the number of R&D researchers, are also found to have high predicting power. The results also highlight the nonlinear and complex relationships between educational inputs and outcomes.
    Keywords: education, East Asia, machine learning, mathematics test score, PISA
    JEL: C53 C55 I21 J24 O1
    Date: 2021–07
  14. By: Ruf, Johannes; Wang, Weiguan
    Abstract: We study neural networks as nonparametric estimation tools for the hedging of options. To this end, we design a network, named HedgeNet, that directly outputs a hedging strategy. This network is trained to minimize the hedging error instead of the pricing error. Applied to end-of-day and tick prices of S&P 500 and Euro Stoxx 50 options, the network is able to reduce the mean squared hedging error of the Black-Scholes benchmark significantly. However, a similar benefit arises by simple linear regressions that incorporate the leverage effect.
    Keywords: benchmarking; Black-Scholes; data leakage; hedging error; leverage effect; statistical hedging; Taylor & Francis deal
    JEL: J1 C1
    Date: 2021–06–30
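The paper's observation that simple linear regressions incorporating the leverage effect can rival the network can be illustrated on simulated data: regress option price changes on underlying moves interacted with Black-Scholes sensitivities. All data and coefficients below are invented for illustration, not the authors' estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Hypothetical per-option data: Black-Scholes delta, vega scaled by the
# underlying price, one-day underlying moves dS and option moves dC.
delta_bs = rng.uniform(0.05, 0.95, n)
vega_s = rng.uniform(0.1, 0.4, n)
dS = rng.standard_normal(n)
dC = (delta_bs - 0.3 * vega_s) * dS + 0.05 * rng.standard_normal(n)

# Statistical hedge: a hedge ratio linear in (1, delta, vega/S),
# estimated by regressing dC on dS interacted with those features.
Z = np.column_stack([dS, delta_bs * dS, vega_s * dS])
beta, *_ = np.linalg.lstsq(Z, dC, rcond=None)
hedge_ratio = beta[0] + beta[1] * delta_bs + beta[2] * vega_s

mse_stat = float(np.mean((dC - hedge_ratio * dS) ** 2))  # statistical hedge
mse_bs = float(np.mean((dC - delta_bs * dS) ** 2))       # plain BS delta hedge
```

On this toy generator the regression-based hedge strictly improves on the plain delta hedge, mirroring the qualitative finding in the abstract.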
  15. By: Jaime D. Acevedo-Viloria; Luisa Roa; Soji Adeshina; Cesar Charalla Olazo; Andrés Rodríguez-Rey; Jose Alberto Ramos; Alejandro Correa-Bahnsen
    Abstract: Large digital platforms create environments where different types of user interactions are captured; these relationships offer a novel source of information for fraud detection problems. In this paper we propose a framework of relational graph convolutional network methods for preventing fraudulent behaviour in the financial services of a Super-App. To this end, we apply the framework to different heterogeneous graphs of users, devices, and credit cards, and finally use an interpretability algorithm for graph neural networks to determine the relations most important to the user classification task. Our results show that there is added value in models that take advantage of the alternative data of the Super-App and the interactions found in its high connectivity, further demonstrating how these can be leveraged into better decisions and fraud detection strategies.
    Date: 2021–07
  16. By: Mathieu Gardrat (LAET - Laboratoire Aménagement Économie Transports - UL2 - Université Lumière - Lyon 2 - ENTPE - École Nationale des Travaux Publics de l'État - CNRS - Centre National de la Recherche Scientifique); Pascal Pluvinet (LAET - Laboratoire Aménagement Économie Transports - UL2 - Université Lumière - Lyon 2 - ENTPE - École Nationale des Travaux Publics de l'État - CNRS - Centre National de la Recherche Scientifique)
    Abstract: The objective of this paper is to present a mesoscopic simulation model of urban freight transport called SIMTURB. This model is based on the results of, and is an extension of, the FRETURB urban freight model [1]. With an architecture based on a Markov process, this model offers a complement and, to some extent, an alternative to multi-agent simulation models, since it makes it possible to characterise precisely the routes of freight transport vehicles in a conurbation and the movements of each agent (e.g. vehicle).
    Keywords: Urban freight transport,Model,SIMTURB,Multi-agent simulation,Markov process,Working Papers du LAET
    Date: 2021–06
  17. By: Chaoran Cui; Xiaojie Li; Juan Du; Chunyun Zhang; Xiushan Nie; Meng Wang; Yilong Yin
    Abstract: Predicting the future price trends of stocks is a challenging yet intriguing problem given its critical role to help investors make profitable decisions. In this paper, we present a collaborative temporal-relational modeling framework for end-to-end stock trend prediction. The temporal dynamics of stocks is firstly captured with an attention-based recurrent neural network. Then, different from existing studies relying on the pairwise correlations between stocks, we argue that stocks are naturally connected as a collective group, and introduce the hypergraph structures to jointly characterize the stock group-wise relationships of industry-belonging and fund-holding. A novel hypergraph tri-attention network (HGTAN) is proposed to augment the hypergraph convolutional networks with a hierarchical organization of intra-hyperedge, inter-hyperedge, and inter-hypergraph attention modules. In this manner, HGTAN adaptively determines the importance of nodes, hyperedges, and hypergraphs during the information propagation among stocks, so that the potential synergies between stock movements can be fully exploited. Extensive experiments on real-world data demonstrate the effectiveness of our approach. Also, the results of investment simulation show that our approach can achieve a more desirable risk-adjusted return. The data and codes of our work have been released at
    Date: 2021–07
  18. By: Qinkai Chen; Christian-Yann Robert
    Abstract: Predicting stock prices from textual information is a challenging task due to the uncertainty of the market and the difficulty of understanding natural language from a machine's perspective. Previous research has focused mostly on sentiment extraction from individual news items. However, stocks on the financial market can be highly correlated, and news about one stock can quickly impact the prices of other stocks. To take this effect into account, we propose a new stock movement prediction framework: the Multi-Graph Recurrent Network for Stock Forecasting (MGRN). This architecture combines the textual sentiment from financial news with multiple relational information extracted from other financial data. Through an accuracy test and a trading simulation on the stocks in the STOXX Europe 600 index, we demonstrate better performance from our model than from other benchmarks.
    Date: 2021–07
  19. By: Ronald Richman; Mario V. Wüthrich
    Abstract: Deep learning models have gained great popularity in statistical modeling because they lead to very competitive regression models, often outperforming classical statistical models such as generalized linear models. The disadvantage of deep learning models is that their solutions are difficult to interpret and explain, and variable selection is not easily possible because deep learning models solve feature engineering and variable selection internally in a nontransparent way. Inspired by the appealing structure of generalized linear models, we propose a new network architecture that shares similar features with generalized linear models but provides superior predictive power benefiting from the art of representation learning. This new architecture allows for variable selection on tabular data and for interpretation of the calibrated deep learning model; in fact, our approach provides an additive decomposition in the spirit of Shapley values and integrated gradients.
    Date: 2021–07
  20. By: Asif Lakhany; Andrej Pintar; Amber Zhang
    Abstract: Accurately fitting the term structure of interest rates is critical to central banks and other market participants. The Nelson-Siegel and Nelson-Siegel-Svensson models are probably the best-known models for this purpose due to their intuitive appeal and simple representation. However, this simplicity comes at a price. The difficulty in calibrating these models is twofold. Firstly, the objective function being minimized during the calibration procedure is nonlinear and has multiple local optima. Secondly, there is strong co-dependence among the model parameters. As a result, their estimated values behave erratically over time. To avoid these problems, we apply a heuristic optimization method, specifically the Genetic Algorithm approach, and show that it is able to construct reliable interest rate curves and stable model parameters over time, regardless of the shape of the curves.
    Date: 2021–08
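To make the calibration problem concrete, here is a sketch that fits the Nelson-Siegel-Svensson curve to a synthetic yield curve with SciPy's differential evolution — an evolutionary heuristic in the same family as, but not identical to, the genetic algorithm used in the paper:

```python
import numpy as np
from scipy.optimize import differential_evolution

def nss(tau, b0, b1, b2, b3, l1, l2):
    """Nelson-Siegel-Svensson zero-coupon yield at maturity tau (years)."""
    x1, x2 = tau / l1, tau / l2
    f1 = (1 - np.exp(-x1)) / x1
    return (b0 + b1 * f1 + b2 * (f1 - np.exp(-x1))
            + b3 * ((1 - np.exp(-x2)) / x2 - np.exp(-x2)))

# Synthetic "observed" curve (in percent) generated from known parameters.
taus = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])
yields = nss(taus, 3.0, -1.5, 2.0, -1.0, 1.5, 8.0)

def sse(p):
    """Sum of squared fitting errors, the nonlinear multi-modal objective."""
    return float(np.sum((nss(taus, *p) - yields) ** 2))

bounds = [(0, 10), (-10, 10), (-20, 20), (-20, 20), (0.05, 10), (0.05, 30)]
result = differential_evolution(sse, bounds, seed=0, maxiter=1000, tol=1e-12)
fitted = nss(taus, *result.x)
```

Because the objective has multiple local optima and strongly co-dependent parameters, a population-based search over box bounds is far more robust here than a single gradient descent from one starting point, which is exactly the motivation the abstract gives.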
  21. By: Mohammad Rafiqul Islam; Masud Alam; Munshi Naser İbne Afzal
    Abstract: This study examines the impact of nighttime light intensity on child health outcomes in Bangladesh. We use nighttime light intensity as a proxy measure of urbanization and argue that the higher the intensity of nighttime light, the higher the degree of urbanization, which positively affects child health outcomes. In the econometric estimation, we employ a methodology that combines parametric and non-parametric approaches, using the Gradient Boosting Machine (GBM), K-Nearest Neighbors (KNN), and Bootstrap Aggregating algorithms that originate from machine learning. Based on our benchmark estimates, a one-standard-deviation increase in nighttime light intensity is associated with a 1.515-unit rise in the weight-for-age Z-score after controlling for several control variables. The maximum increases in the weight-for-height and height-for-age scores range from 5.35 to 7.18 units. To further probe our benchmark estimates, generalized additive models also show a robust positive relationship between nighttime light intensity and children's health outcomes. Finally, we develop an economic model that supports the empirical finding of this study that the marginal effect of urbanization on children's nutritional outcomes is strictly positive.
    Date: 2021–08
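The three estimators named above (GBM, KNN, and bootstrap aggregating) are all available in scikit-learn; a toy version on simulated data — the variables and coefficients here are invented for illustration, not the study's — looks like:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(4)
n = 1000

# Invented data: standardized nighttime light intensity plus four
# controls, and a weight-for-age Z-score rising with light intensity.
light = rng.uniform(0, 3, n)
controls = rng.standard_normal((n, 4))
z_score = (1.5 * light + controls @ np.array([0.3, -0.2, 0.1, 0.0])
           + rng.standard_normal(n))
X = np.column_stack([light, controls])

models = {
    "gbm": GradientBoostingRegressor(random_state=0).fit(X, z_score),
    "knn": KNeighborsRegressor(n_neighbors=10).fit(X, z_score),
    "bagging": BaggingRegressor(random_state=0).fit(X, z_score),
}
```

Each fitted model can then be queried for the predicted outcome at high versus low light intensity, holding the controls fixed.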
  22. By: Nicola Curci (Bank of Italy); Marco Savegnago (Bank of Italy)
    Abstract: The paper discusses a possible scheme for a new universal child allowance (assegno unico e universale per i figli, AUU) and evaluates its effects on income distribution (equity) and on financial disincentives to work (efficiency). The analysis, carried out using the Banca d'Italia tax-benefit microsimulation model BIMic, takes into account the principles defined in the enabling law recently approved by the Parliament and the budgetary resources set aside for this measure. The scheme envisaged in the paper differs from the proposals discussed so far in public debate about the AUU due to a significant innovation, namely the introduction of an in-work benefit component. The simulated reform would not only reduce the inequality of disposable income with respect to the current legislation scenario, but also – due to the above mentioned in-work benefit – would lessen the financial disincentives to labor market participation for potential female workers. The latter result is particularly strong for low-income households.
    Keywords: family policies, redistribution, equity, efficiency, microsimulation
    JEL: D31
    Date: 2021–07
  23. By: Mohamad Ebrahim Sadeghi; Morteza Khodabakhsh; Mahmood Reza Ganjipoor; Hamed Kazemipoor; Hamed Nozari
    Abstract: Every year, natural disasters such as earthquakes, floods, and hurricanes impose immense financial and human losses on governments owing to their unpredictable character; the resulting emergency situations and the loss of capacity due to serious damage to infrastructure increase the demand for logistics services and supplies. First, this study points out the necessity of attending to location procedures in emergency situations, discusses an outline of the studied disaster relief supply chain case, and validates the problem at small scale. Furthermore, to solve this kind of problem, which involves three objective functions and complicated computation times, meta-heuristic methods that yield near-optimum solutions in less time are applied. The EC method and the NSGA-II algorithm are among the evolutionary multi-objective optimization algorithms applied in this case. In this study the aforementioned algorithm is used to solve the problem at large scale.
    Date: 2021–08
  24. By: Jan Matas; Jan Pospíšil
    Abstract: Rough Volterra volatility models are a progressive and promising field of research in derivative pricing. Although rough fractional stochastic volatility models have already proved superior in fitting real market data, the techniques used to simulate these models are still inefficient in terms of speed and accuracy. This paper aims to present accurate tools and techniques that could also be used in the rapidly emerging pricing methods based on machine learning. In particular, we compare three widely used simulation methods: the Cholesky method, the Hybrid scheme, and the rDonsker scheme. We also comment on the implementation of variance reduction techniques. In particular, we show the obstacles of the so-called turbocharging technique, whose performance is sometimes counterproductive. To overcome these obstacles, we suggest several modifications.
    Date: 2021–07
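Of the three schemes compared, the Cholesky method is the conceptually simplest: factor the exact covariance matrix of fractional Brownian motion and multiply by Gaussian noise. A compact sketch (unit horizon, a rough Hurst index, and not tied to the paper's exact experimental setup):

```python
import numpy as np

def simulate_fbm_cholesky(n_steps, hurst, n_paths, seed=0):
    """Exact simulation of fractional Brownian motion on a unit-time grid
    via Cholesky factorization of its covariance matrix."""
    t = np.linspace(1 / n_steps, 1.0, n_steps)
    s, u = np.meshgrid(t, t)
    # fBm covariance: 0.5 * (s^2H + u^2H - |s - u|^2H).
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst)
                 - np.abs(s - u) ** (2 * hurst))
    L = np.linalg.cholesky(cov)          # O(n^3): exact but slow
    z = np.random.default_rng(seed).standard_normal((n_steps, n_paths))
    return L @ z                         # each column is one fBm path

paths = simulate_fbm_cholesky(n_steps=100, hurst=0.1, n_paths=2000)
```

The cubic cost of the factorization is the trade-off against the faster approximate Hybrid and rDonsker schemes discussed in the paper.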
  25. By: Victor Storchan; Svitlana Vyetrenko; Tucker Balch
    Abstract: In electronic trading markets, often only the price or volume time series that result from the interaction of multiple market participants are directly observable. To test trading strategies before deploying them to real-time trading, multi-agent market environments are often used, calibrated so that the time series resulting from the interaction of simulated agents resemble historical ones. To ensure adequate testing, one must test trading strategies in a variety of market scenarios, including both scenarios that represent ordinary market days and stressed markets (most recently observed at the beginning of the COVID-19 pandemic). In this paper, we address the problem of multi-agent simulator parameter calibration to allow the simulator to capture the characteristics of different market regimes. We propose a novel two-step method: we train a discriminator that is able to distinguish between "real" and "fake" price and volume time series as part of a GAN with self-attention, and then utilize it within an optimization framework to tune the parameters of a simulator model with known agent archetypes to represent a market scenario. We conclude with experimental results that demonstrate the effectiveness of our method.
    Date: 2021–08
  26. By: Bureau, Benjamin; Duquerroy, Anne; Giorgi, Julien; Lé, Mathias; Scott, Suzanne; Vinas, Frédéric
    Abstract: Using rich granular data for over 645,000 French firms in 2020, this paper builds a micro-simulation model to assess the impact of the Covid-19 crisis on corporate liquidity. Going beyond the aggregate picture, we document that while net debt has been fairly stable at the macroeconomic level, individual heterogeneity is widespread: significant dispersion in changes in net debt prevails both between and within industries, before as well as after public support. We show that the probability of experiencing a negative liquidity shock, as well as the intensity of this shock, is negatively correlated with the firm's initial credit quality (based on Banque de France internal ratings). Our model also finds that public support significantly dampens the impact of Covid-19 on the dispersion of liquidity shocks and brings the distribution of liquidity shocks back closer to its pre-crisis path, albeit with fatter tails.
    Keywords: Covid-19; Micro-simulation; Non-financial Corporations; Cash Holding; Debt
    JEL: D22 G32 G38
    Date: 2021
  27. By: Bureau, Benjamin; Duquerroy, Anne; Giorgi, Julien; Lé, Mathias; Scott, Suzanne; Vinas, Frédéric
    Abstract: Taking advantage of detailed firm-level data on VAT returns, we estimate the monthly impact of the Covid-19 crisis on the turnover of more than 645,000 French firms. Our approach, based on a micro-simulation model, is innovative in three ways. First, we quantify the activity loss with respect to a counterfactual situation in which the crisis had not hit. Second, we estimate this shock at the firm level, enabling a thorough analysis of the heterogeneity of activity losses throughout the crisis; in particular, we shed light on the dispersion of the shock both within and between industries. We show that the industry a firm operates in explains up to 48% of the employment-weighted variance of monthly activity shocks, a much larger share than in a normal year. Finally, we leverage our monthly firm-level sales data to show how corporate activity evolved along four distinct trajectories throughout 2020. The main determinant of belonging to a given activity profile is the firm's industry, defined at a very granular level. Conditional on industry, the activity trajectory is also correlated with the capacity to adapt, in terms of organization and production, that some firms demonstrated during the crisis.
    Keywords: Covid-19 ; business dynamics ; micro-simulation ; non-financial corporations
    JEL: D22 G38 H32
    Date: 2021
  28. By: Elliott Ash; Germain Gauthier; Philine Widmer
    Abstract: Social scientists have become increasingly interested in how narratives -- the stories in fiction, politics, and life -- shape beliefs, behavior, and government policies. This paper provides a novel, unsupervised method to quantify latent narrative structures in text, drawing on computational linguistics tools for clustering coherent entity groups and capturing relations between them. After validating the method, we provide an application to the U.S. Congressional Record to analyze political and economic narratives in recent decades. Our analysis highlights the dynamics, sentiment, polarization, and interconnectedness of narratives in political discourse.
    Date: 2021–08
  29. By: Mortazi, Mohammad; Moradi, Ahmad; Khosravi, Mohsen
    Abstract: Voltage and reactive power control play an important role in the operation of distribution networks. Conventional methods, such as installing a capacitor of appropriate capacity at an optimal location and optimally setting transformer taps, have a substantial effect on voltage and reactive power control, but studies of the simultaneous use of these two methods are limited, so such an investigation seems necessary. The presence of Distributed Generation (DG) resources in distribution networks has grown in recent years and strongly influences the voltage profile, owing to the radial structure of the distribution network and its low X/R ratio. It is therefore necessary to coordinate the optimal use of switchable capacitors with the setting of transformer taps in the presence of distributed generation resources, so as to improve the voltage profile and reduce losses. This paper examines the simultaneous use of capacitors and transformer taps in distribution networks to reduce voltage deviation and distribution losses in the presence of distributed generation resources. Six different operation scenarios are defined and studied, implemented on the IEEE 13- and 34-bus standard test networks, and the results are presented. They clearly indicate the necessity of coordinating the use of these tools in distribution networks.
    Keywords: Capacitor, Distributed Generation Resources, Loss Reduction, Particle Swarm Optimization Algorithm, Transformer Tap Changer, Voltage Adjustment
    JEL: O14 O32 Q32
    Date: 2020–07–30
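    The keywords point to a particle swarm optimization algorithm for the capacitor/tap coordination problem. A generic PSO loop looks like the following; the objective, bounds, and coefficients here are illustrative stand-ins, not the paper's formulation of the network loss function.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over a box via a basic particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# toy stand-in for a loss-minimisation objective
best, loss = pso(lambda z: np.sum((z - 0.5) ** 2), bounds=[(-1, 1)] * 3)
```

    In the paper's setting the decision variables would be capacitor switching states and discrete tap positions, which in practice are handled by rounding or by a discrete PSO variant.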
  30. By: Hugo Couture; Dalibor Stevanovic
    Abstract: In this report, we evaluate the relevance of weekly Google search query data for nowcasting and one-month-ahead forecasting of several labour market variables in Canada and Quebec. Several types of mixed-frequency models are considered, and their performance is evaluated in an out-of-sample forecasting exercise spanning the period 2014M09–2019M09. Google Trends improve the accuracy of forecasts of the employment rate, hours worked and the unemployment rate. The availability of these data at high frequency is crucial: their contribution matters especially during the first two weeks of the month, before Labour Force Survey data for the previous month become available.
    Keywords: Forecasting, Macroeconomics, Job market, Google Trends, Machine Learning
    JEL: C53 C55 E37
    Date: 2021–08–02
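    One common mixed-frequency specification of the kind the abstract alludes to is an unrestricted MIDAS (U-MIDAS) regression, which stacks each month's weekly indicator values as separate regressors. The sketch below uses simulated data and illustrative coefficients; it is not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
months = 120
# 4 weekly Google-Trends-style observations per month (simulated)
weekly = rng.standard_normal((months, 4))
# monthly target: early-month weeks carry the most signal
y = weekly @ np.array([0.8, 0.4, 0.2, 0.1]) + 0.1 * rng.standard_normal(months)

# U-MIDAS: one OLS coefficient per intra-month week, plus an intercept
X = np.column_stack([np.ones(months), weekly])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
fitted = X @ beta
```

    Declining weights across weeks mirror the report's finding that the first two weeks of the month are the most informative; restricted MIDAS variants impose such a decay pattern through a parametric lag polynomial instead of free coefficients.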
  31. By: Parle, Conor (Central Bank of Ireland)
    Abstract: Using methods from natural language processing, I create two measures of the ECB's monetary policy tilt, the “Hawk-Dove Indices”, which capture the ECB's beliefs about the current state of the economy and the outlook for growth and inflation. These measures closely track interest rate expectations over the tightening and loosening cycle, provide a useful measure of monetary policy tilt during zero-lower-bound episodes, and contain information about the state of the economy. I exploit the time lag between decision announcements and the ECB's monetary policy press conference to assess the immediate financial market impact of changes in communication within the press conference, free from the effects of the shock from the monetary policy decision itself. Consistent with the literature on the information channel of monetary policy, I find a non-negligible positive (negative) effect on stock prices of a more hawkish (dovish) tone in the press conference, indicating that the ECB reveals “private information” during these press conferences and that market participants internalise this as good (bad) news about the future state of the economy, rather than as a signal of a future increase (decrease) in interest rates. This effect is stronger prior to the introduction of formal forward guidance, suggesting that ECB communication has since been less surprising to markets.
    Keywords: Monetary policy, communication, machine learning, natural language processing, event study, information effects
    JEL: E52 E58 C55
    Date: 2021–05
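    A dictionary-based tilt score of the general kind behind such indices can be illustrated as follows. The word lists and the net-count formula are illustrative assumptions for exposition, not the paper's construction (which uses NLP methods on the full press-conference text).

```python
import re

# illustrative (not the paper's) dictionaries
HAWKISH = {"tighten", "tightening", "inflationary", "overheating", "hike", "vigilance"}
DOVISH = {"accommodative", "easing", "downside", "stimulus", "subdued", "weak"}

def hawk_dove_score(text):
    """Net hawkish tone in [-1, 1]: (hawkish - dovish) / total matched words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    h = sum(t in HAWKISH for t in tokens)
    d = sum(t in DOVISH for t in tokens)
    return 0.0 if h + d == 0 else (h - d) / (h + d)

score = hawk_dove_score("Inflationary pressures warrant vigilance and a rate hike.")
# score = 1.0 (all matched words are hawkish)
```

    An event-study design like the paper's would then regress intraday stock returns around the press conference on changes in such a score.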
  32. By: Bonacini, Luca; Brunetti, Irene; Gallo, Giovanni
    Abstract: This study aims to identify the main determinants of student performance in reading and maths across eight European Union countries (Austria, Croatia, Germany, Hungary, Italy, Portugal, Slovakia, and Slovenia). Based on student-level data from the OECD's PISA 2018 survey and the application of efficient machine learning algorithms, we highlight that the number of books at home and a variable combining the type and location of the school are the most important predictors of student performance in all of the analysed countries, while other school characteristics are rarely relevant. Econometric results show that students attending vocational schools perform significantly worse than those in general schools, except in Portugal. Considering only general school students, the differences between big and small cities are not statistically significant, while among vocational school students, those in small cities tend to perform better than those in big cities. Through the Gelbach decomposition method, which measures the relative importance of observable characteristics in explaining a gap, we show that the differences in test scores between big and small cities depend on school characteristics, while the differences between general and vocational schools are mainly explained by family social status.
    Keywords: Gelbach decomposition, Education inequalities, Machine learning, PISA, School tracking, Student performance
    JEL: I21 I24 J24
    Date: 2021
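    The Gelbach decomposition attributes the change in a gap coefficient between a base regression (outcome on the group dummy alone) and a full regression (adding covariates) to each covariate: covariate k contributes its full-model coefficient times the coefficient from an auxiliary regression of that covariate on the dummy, an exact omitted-variable-bias identity. A minimal NumPy sketch on simulated data (not the paper's specification or variables):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
d = rng.integers(0, 2, n).astype(float)             # group dummy (e.g. school type)
x1 = 1.0 * d + rng.standard_normal(n)               # covariates correlated with group
x2 = 0.5 * d + rng.standard_normal(n)
y = 2.0 * x1 + 1.0 * x2 + rng.standard_normal(n)    # no direct group effect on y

def ols(X, y):
    """OLS coefficients for y on a constant plus the columns in X."""
    return np.linalg.lstsq(np.column_stack([np.ones(len(y)), *X]), y, rcond=None)[0]

gap_base = ols([d], y)[1]                 # raw gap (coefficient on d)
beta_full = ols([d, x1, x2], y)           # full model: [const, d, x1, x2]
delta = [ols([d], x)[1] for x in (x1, x2)]            # auxiliary regressions
contrib = [b * g for b, g in zip(beta_full[2:], delta)]
# exact identity: base gap = conditional gap + sum of covariate contributions
gap_check = beta_full[1] + sum(contrib)
```

    In this simulation the raw gap is driven almost entirely by the covariates (the conditional gap is near zero), mirroring the paper's finding that the general-versus-vocational score gap is mainly explained by family social status.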
  33. By: Cantó, Olga; Figari, Francesco; Fiorio, Carlo V.; Kuypers, Sarah; Marchal, Sarah; Romaguera-de-la-Cruz, Marina; Tasseva, Iva V.; Verbist, Gerlinde
    Abstract: This paper assesses the impact on household incomes of the COVID-19 pandemic and governments’ policy responses in April 2020 in four large and severely hit EU countries: Belgium, Italy, Spain and the UK. We provide comparative evidence on the level of relative and absolute welfare resilience at the onset of the pandemic, by creating counterfactual scenarios using the European tax-benefit model EUROMOD combined with COVID-19-related household surveys and timely labor market data. We find that income poverty increased in all countries due to the pandemic while inequality remained broadly the same. Differences in the impact of policies across countries arose from four main sources: the asymmetric dimension of the shock by country, the different protection offered by each tax-benefit system, the diverse design of discretionary measures and differences in the household level circumstances and living arrangements of individuals at risk of income loss in each country.
    Keywords: Covid-19; cross-country comparison; household incomes; income protection; tax-benefit microsimulation; coronavirus
    JEL: D31 H55 I32
    Date: 2021–07–01
  34. By: Jason R. Blevins; Minhae Kim
    Abstract: We introduce a sequential estimator for continuous time dynamic discrete choice models (single-agent models and games) by adapting the nested pseudo likelihood (NPL) estimator of Aguirregabiria and Mira (2002, 2007), developed for discrete time models with discrete time data, to the continuous time case with data sampled either discretely (i.e., uniformly-spaced snapshot data) or continuously. We establish conditions for consistency and asymptotic normality of the estimator, a local convergence condition, and, for single agent models, a zero Jacobian property assuring local convergence. We carry out a series of Monte Carlo experiments using an entry-exit game with five heterogeneous firms to confirm the large-sample properties and demonstrate finite-sample bias reduction via iteration. In our simulations we show that the convergence issues documented for the NPL estimator in discrete time models are less likely to affect comparable continuous-time models. We also show that there can be large bias in economically-relevant parameters, such as the competitive effect and entry cost, from estimating a misspecified discrete time model when in fact the data generating process is a continuous time model.
    Date: 2021–08

This nep-cmp issue is ©2021 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.