nep-cmp New Economics Papers
on Computational Economics
Issue of 2022‒04‒04
eighteen papers chosen by



  1. Stock Embeddings: Learning Distributed Representations for Financial Assets By Rian Dolphin; Barry Smyth; Ruihai Dong
  2. Explainable Artificial Intelligence: interpreting default forecasting models based on Machine Learning By Giuseppe Cascarino; Mirko Moscatelli; Fabio Parlapiano
  3. Volatility forecasting with machine learning and intraday commonality By Chao Zhang; Yihuang Zhang; Mihai Cucuringu; Zhongmin Qian
  4. An SMP-Based Algorithm for Solving the Constrained Utility Maximization Problem via Deep Learning By Kristof Wiedermann
  5. Interpolation of temporal biodiversity change, loss, and gain across scales: a machine learning approach By Keil, Petr; Chase, Jonathan
  6. Reciprocity in Machine Learning By Mukund Sundararajan; Walid Krichene
  7. Stripping the Discount Curve - a Robust Machine Learning Approach By Damir Filipović; Markus Pelger; Ye Ye
  8. Understanding the macroeconomic effects of public research: An application of a regression-microfounded CGE-model to the case of the Fraunhofer-Gesellschaft in Germany By Grant, Allan; Figus, Gioele; Schubert, Torben
  9. State or market: Investments in new nuclear power plants in France and their domestic and cross-border effects By Zimmermann, Florian; Keles, Dogan
  10. Measuring the Impact of Campaign Finance on Congressional Voting: A Machine Learning Approach By Matthias Lalisse
  11. Artificial Intelligence: A Reappraisal By Andrés Fernández Díaz; Benito Rodríguez Mallol
  12. JAQ of All Trades: Job Mismatch, Firm Productivity and Managerial Quality By Luca Coraggio; Marco Pagano; Annalisa Scognamiglio; Joacim Tåg
  13. Toward an efficient hybrid method for pricing barrier options on assets with stochastic volatility By Alexander Lipton; Artur Sepp
  14. Valuation of national accounts in the economic system of Peru, 2007-2020 By Lugo, Josefrank Pernalete; Rossel, Ysaelen Josefina Odor
  15. Dynamic causal effects evaluation in A/B testing with a reinforcement learning framework By Shi, Chengchun; Wang, Xiaoyu; Luo, Shikai; Zhu, Hongtu; Ye, Jieping; Song, Rui
  16. Strong core and Pareto-optimal solutions for the multiple partners matching problem under lexicographic preferences By Péter Biró; Gergely Csáji
  17. ifo DSGE Model 2.0 By Radek Šauer
  18. A German inflation narrative. How the media frame price dynamics: Results from a RollingLDA analysis By Müller, Henrik; Schmidt, Tobias; Rieger, Jonas; Hufnagel, Lena Marie; Hornig, Nico

  1. By: Rian Dolphin; Barry Smyth; Ruihai Dong
    Abstract: Identifying meaningful relationships between the price movements of financial assets is a challenging but important problem in a variety of financial applications. However, with recent research, particularly that using machine learning and deep learning techniques, focused mostly on price forecasting, the literature on modelling asset correlations has lagged somewhat. To address this, inspired by recent successes in natural language processing, we propose a neural model for training stock embeddings, which harnesses the dynamics of historical returns data in order to learn the nuanced relationships that exist between financial assets. We describe our approach in detail, discuss a number of ways that it can be used in the financial domain, and present evaluation results demonstrating the utility of this approach, compared to several important benchmarks, in two real-world financial analytics tasks.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.08968&r=
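    A minimal sketch of the general idea, not the authors' architecture: treat stocks whose returns fall in the same daily return quintile as co-occurring "words" and learn embeddings with an off-the-shelf skip-gram model. The tickers, returns, and gensim-based setup below are illustrative assumptions:
```python
# Hypothetical sketch: learn stock embeddings from daily return co-movement by
# treating, for each day, the stocks in the same return quintile as a "sentence"
# and running skip-gram word2vec over those sentences. Not the paper's model.
import numpy as np
from gensim.models import Word2Vec

rng = np.random.default_rng(0)
tickers = [f"STK{i:02d}" for i in range(30)]      # synthetic universe
returns = rng.normal(0, 0.01, size=(500, 30))     # 500 days of fake daily returns

sentences = []
for day in returns:
    order = np.argsort(day)                       # rank stocks by that day's return
    for quintile in np.array_split(order, 5):     # one "sentence" per return quintile
        sentences.append([tickers[i] for i in quintile])

model = Word2Vec(sentences, vector_size=16, window=5, min_count=1, sg=1, epochs=20)
print(model.wv.most_similar("STK00", topn=3))     # nearest assets in embedding space
```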
  2. By: Giuseppe Cascarino (Bank of Italy); Mirko Moscatelli (Bank of Italy); Fabio Parlapiano (Bank of Italy)
    Abstract: Forecasting models based on machine learning (ML) algorithms have been shown to outperform traditional models in several applications. The lack of an easily interpretable functional form, however, is a major challenge for their adoption, especially when knowledge of the estimated relationships and an explanation of individual forecasts are needed, for instance due to regulatory requirements or when forecasts are used in policy making. We apply some of the most established methods from the eXplainable Artificial Intelligence (XAI) literature to shed light on the random forest corporate default forecasting model in Moscatelli et al. (2019), applied to Italian non-financial firms. The methods provide insight into the relative importance of financial and credit variables for predicting firms’ financial distress. We complement the analysis by showing how the importance of these variables in explaining default risk changes over time in the period 2009-19. When financial conditions deteriorate, the variables characterized by a more complex relationship with financial distress, such as firms’ liquidity and indebtedness indicators, become more important in predicting borrowers’ defaults. We also discuss how ML models could enhance the accuracy of credit assessment for borrowers with less developed credit relationships, such as smaller firms.
    Keywords: explainable artificial intelligence, model-agnostic explainability, artificial intelligence, machine learning, credit scoring, fintech
    JEL: G2 C52 C55 D83
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:bdi:opques:qef_674_22&r=
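    A minimal sketch of the workflow on synthetic firm-level data, using permutation importance as a stand-in for the XAI methods applied in the paper (the feature names and data-generating process below are invented for illustration):
```python
# Fit a random forest default classifier on synthetic firm data and inspect it with
# a model-agnostic explainability tool (permutation importance). Feature names and
# data are made up; the paper's own XAI toolkit may differ.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = pd.DataFrame({
    "leverage": rng.uniform(0, 1, n),
    "liquidity": rng.uniform(0, 1, n),
    "profitability": rng.normal(0.05, 0.1, n),
    "credit_line_usage": rng.uniform(0, 1, n),
})
# synthetic default indicator: more likely when leverage is high and liquidity low
p = 1 / (1 + np.exp(-(3 * X["leverage"] - 3 * X["liquidity"] - 1)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:20s} {score:.3f}")
```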
  3. By: Chao Zhang; Yihuang Zhang; Mihai Cucuringu; Zhongmin Qian
    Abstract: We apply machine learning models to forecast intraday realized volatility (RV), by exploiting commonality in intraday volatility via pooling stock data together, and by incorporating a proxy for the market volatility. Neural networks dominate linear regressions and tree models in terms of performance, due to their ability to uncover and model complex latent interactions among variables. Our findings remain robust when we apply trained models to new stocks that have not been included in the training set, thus providing new empirical evidence for a universal volatility mechanism among stocks. Finally, we propose a new approach to forecasting one-day-ahead RVs using past intraday RVs as predictors, and highlight interesting diurnal effects that aid the forecasting mechanism. The results demonstrate that the proposed methodology yields superior out-of-sample forecasts over a strong set of traditional baselines that only rely on past daily RVs.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.08962&r=
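    The core pipeline can be illustrated on simulated data: compute daily realized volatility from intraday returns, pool observations across stocks, and fit a single forecaster. The small MLP and lag-only feature set below are placeholders, not the paper's models:
```python
# Sketch: compute daily realized volatility (RV) from intraday returns for several
# stocks, pool them, and fit one forecaster mapping past RVs to next-day RV.
# Data are simulated; the paper's feature set and models differ.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_stocks, n_days, bars = 10, 300, 78                 # 78 five-minute bars per day
intraday = rng.normal(0, 0.001, size=(n_stocks, n_days, bars))
rv = np.sqrt((intraday ** 2).sum(axis=2))            # daily realized volatility

lags = 5
X, y = [], []
for s in range(n_stocks):                            # pooling: stack all stocks together
    for t in range(lags, n_days):
        X.append(rv[s, t - lags:t])
        y.append(rv[s, t])
X, y = np.array(X), np.array(y)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X[:-500], y[:-500])                        # crude holdout: last 500 pooled rows
print("out-of-sample R^2:", model.score(X[-500:], y[-500:]))
```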
  4. By: Kristof Wiedermann
    Abstract: We consider the utility maximization problem under convex constraints, focusing on theoretical results that allow the formulation of algorithmic solvers based on deep learning techniques. In particular, for the case of random coefficients, we prove a stochastic maximum principle (SMP), which also holds for utility functions $U$ for which $\mathrm{id}_{\mathbb{R}^{+}} \cdot U'$ is not necessarily nonincreasing, such as the power utility functions, thereby generalizing the SMP proved by Li and Zheng (2018). We use this SMP together with the strong duality property to define a new algorithm, which we call the deep primal SMP algorithm. Numerical examples illustrate the effectiveness of the proposed algorithm, in particular for higher-dimensional problems and problems with random coefficients, which are either path dependent or satisfy their own SDEs. Moreover, our numerical experiments for constrained problems show that the novel deep primal SMP algorithm overcomes the weakness of the deep SMP algorithm (see Davey and Zheng (2021)) of erroneously producing the value of the corresponding unconstrained problem. Furthermore, in contrast to the deep controlled 2BSDE algorithm from Davey and Zheng (2021), this algorithm is also applicable to problems with path-dependent coefficients. As the deep primal SMP algorithm yields the most accurate results in many of the problems we study, we can highly recommend its use. Moreover, we propose an epoch-based learning procedure which improves the results of our algorithm even further. Implementing a semi-recurrent network architecture for the control process also turned out to be a valuable improvement.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.07771&r=
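    The sketch below shows only the generic building block that deep algorithms of this kind share, namely parameterizing the control by a neural network and maximizing simulated expected utility by gradient ascent; it does not implement the paper's SMP-based duality step or handle constraints, and all dynamics and parameters are illustrative:
```python
# Illustrative building block only: parameterize the investment fraction by a small
# neural network of (time, wealth) and maximize simulated expected power utility by
# gradient ascent. NOT the paper's deep primal SMP algorithm (no SMP/duality step,
# no constraints); dynamics and parameters are invented for the sketch.
import torch
from torch import nn

torch.manual_seed(0)
T, n_steps, n_paths = 1.0, 50, 2048
mu, sigma, r, gamma = 0.08, 0.2, 0.02, 0.5       # market and utility parameters
dt = T / n_steps

policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for _ in range(100):
    x = torch.ones(n_paths, 1)                    # initial wealth
    for k in range(n_steps):
        t = torch.full_like(x, k * dt)
        pi = policy(torch.cat([t, x], dim=1))     # fraction of wealth in the risky asset
        dW = torch.randn_like(x) * dt ** 0.5
        x = x * (1 + r * dt + pi * ((mu - r) * dt + sigma * dW))
        x = x.clamp(min=1e-6)                     # keep wealth positive for the utility
    utility = (x.pow(1 - gamma) / (1 - gamma)).mean()   # power utility
    opt.zero_grad()
    (-utility).backward()                         # gradient ascent on expected utility
    opt.step()
print("estimated expected utility:", float(utility))
```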
  5. By: Keil, Petr; Chase, Jonathan
    Abstract: 1. Estimates of temporal change of biodiversity, and its components loss and gain, are needed at local and geographical scales. However, we lack them because of data incompleteness, heterogeneity, and lack of temporal replication. Hence, we need a tool to integrate heterogeneous data and to account for their incompleteness. 2. We introduce spatiotemporal machine learning interpolation that can estimate cross-scale biodiversity change and its components. The approach naturally captures the expected and complex interactions between scale (grain), geography, data types, and drivers of change. As such it can integrate inventory data from reserves or countries with data from atlases and local survey plots. We present two flavors, both blending tree-based machine learning (random forests, boosted trees) with advances in ecological scaling: The first combines machine learning with species-area relationships (SAR method), the second with occupancy-area relationships (OAR method). 3. Using simulated data and an empirical example of global mammals and European plants, we show that tree-based machine learning effectively captures temporal biodiversity change, loss, and gain across a continuum of spatial grains. This can be done despite the lack of time series data (i.e., it does not require temporal replication at sites), temporal biases in the amount of data, and highly uneven sampling area. These estimates can be mapped at any desired spatial resolution. 4. In all, this is a user-friendly and computationally fast approach with minimal requirements on data format. It can integrate heterogeneous biodiversity data to obtain estimates of temporal biodiversity change, loss, and gain, that would otherwise be invisible in the raw data alone.
    Date: 2022–03–15
    URL: http://d.repec.org/n?u=RePEc:osf:ecoevo:rky7b&r=
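    A rough illustration of the interpolation idea on synthetic data: train a random forest on heterogeneous richness records indexed by grain (area), location, and year, then predict richness for the same cell in two years to obtain a temporal change estimate. This is not the paper's SAR or OAR estimator:
```python
# Sketch of the general idea: train a tree-based model on heterogeneous species-
# richness records (grain, location, year) and use it to interpolate richness, and
# hence temporal change, at any grain. Synthetic data, not the paper's estimators.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 3000
area = 10 ** rng.uniform(-1, 4, n)                   # sampling grain, km^2
lat, lon = rng.uniform(35, 70, n), rng.uniform(-10, 30, n)
year = rng.integers(1980, 2020, n)
# synthetic species-area relationship with a mild temporal decline
richness = 20 * area ** 0.25 * (1 - 0.002 * (year - 1980)) * rng.lognormal(0, 0.1, n)

X = np.column_stack([np.log10(area), lat, lon, year])
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, richness)

grid = np.array([[2.0, 50.0, 10.0, 1990], [2.0, 50.0, 10.0, 2015]])  # same cell, two years
pred = rf.predict(grid)
print("interpolated richness change at the 100 km^2 grain:", pred[1] - pred[0])
```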
  6. By: Mukund Sundararajan (Google); Walid Krichene (Google Research)
    Abstract: Machine learning is pervasive. It powers recommender systems such as Spotify, Instagram and YouTube, and health-care systems via models that predict sleep patterns, or the risk of disease. Individuals contribute data to these models and benefit from them. Are these contributions (outflows of influence) and benefits (inflows of influence) reciprocal? We propose measures of outflows, inflows and reciprocity building on previously proposed measures of training data influence. Our initial theoretical and empirical results indicate that under certain distributional assumptions, some classes of models are approximately reciprocal. We conclude with several open directions.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.09480&r=
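    The simplest training-data influence measure, leave-one-out retraining, can illustrate the outflow/inflow idea on a toy logistic regression; the paper builds on more refined influence measures, and the data below are synthetic:
```python
# Simplest possible influence measure: the leave-one-out (LOO) change in another
# example's loss when one training point is removed. A toy illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

def loss_on(model, x, label):
    return log_loss([label], model.predict_proba([x]), labels=[0, 1])

full = LogisticRegression().fit(X, y)
i, j = 0, 1                                      # influence of point i on point j
mask = np.arange(len(X)) != i
loo = LogisticRegression().fit(X[mask], y[mask])

delta = loss_on(loo, X[j], y[j]) - loss_on(full, X[j], y[j])
print("LOO influence of point i on point j's loss:", delta)
# Reciprocity, loosely, asks whether such outflows from i are matched by
# comparable inflows benefiting i when the roles are reversed.
```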
  7. By: Damir Filipović (Ecole Polytechnique Fédérale de Lausanne; Swiss Finance Institute); Markus Pelger (Stanford University - Department of Management Science & Engineering); Ye Ye (Stanford University)
    Abstract: We introduce a robust, flexible and easy-to-implement method for estimating the yield curve from Treasury securities. This method is non-parametric and optimally learns basis functions in reproducing kernel Hilbert spaces with an economically motivated smoothness reward. We provide a closed-form solution of our machine learning estimator as a simple kernel ridge regression, which is straightforward and fast to implement. We show, in an extensive empirical study on U.S. Treasury securities, that our method strongly dominates all parametric and non-parametric benchmarks. Our method achieves substantially smaller out-of-sample yield and pricing errors, while being robust to outliers and data selection choices. We attribute the superior performance to the optimal trade-off between flexibility and smoothness, which positions our method as the new standard for yield curve estimation.
    Keywords: yield curve estimation, U.S. Treasury securities, term structure of interest rates, nonparametric method, machine learning in finance, reproducing kernel Hilbert space
    JEL: C14 C38 C55 E43 G12
    Date: 2022–03
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp2224&r=
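    Since the estimator reduces to a kernel ridge regression, the mechanics can be illustrated with scikit-learn's generic KernelRidge on synthetic yields; note that the paper fits bond prices with an economically motivated smoothness kernel, whereas the RBF kernel and yield-space fit below are simplifying assumptions:
```python
# Generic kernel ridge regression of yields on time-to-maturity, as a stand-in for
# the paper's estimator, purely to illustrate the mechanics on synthetic data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(5)
maturities = rng.uniform(0.25, 30, 80).reshape(-1, 1)     # years
true_yield = 0.01 + 0.03 * (1 - np.exp(-maturities / 5))  # Nelson-Siegel-like shape
yields = (true_yield + rng.normal(0, 0.001, maturities.shape)).ravel()

krr = KernelRidge(alpha=1e-3, kernel="rbf", gamma=0.05).fit(maturities, yields)

grid = np.linspace(0.25, 30, 120).reshape(-1, 1)
curve = krr.predict(grid)                                  # smoothed (stripped) yield curve
discount = np.exp(-curve * grid.ravel())                   # implied discount factors
print(discount[:5])
```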
  8. By: Grant, Allan; Figus, Gioele; Schubert, Torben
    Abstract: Estimating the economic returns to public science investments has been a key topic in economics. However, while microeconomic approaches in particular have been proposed, only a few studies have tried to estimate the macroeconomic effects of public science investments. In this paper, we propose a micro-rooted macro-modelling framework, which combines the strength of an econometric causal identification of key effects with the power of a Computable General Equilibrium (CGE) framework, and provides additional economic structure for the estimates, allowing a fine-grained sectoral differentiation of all effects. Applying our approach to the German Fraunhofer-Gesellschaft, the world's largest publicly funded organization for applied research, we show that macroeconomic returns are - irrespective of econometric specification - a large multiple of the original investment costs. Specifically, the activities of the Fraunhofer-Gesellschaft increase German GDP by 1.6% and employment by 437,000 jobs. Our CGE analysis further shows that the effects are concentrated in the chemicals, pharmaceuticals, motor vehicles and machinery sectors. The substantial size of our estimated effects corroborates recent macroeconomic evidence on the social returns to innovation.
    Keywords: macroeconomic effects of public research, Fraunhofer-Gesellschaft, regression-microfounded CGE model
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:zbw:fisidp:72&r=
  9. By: Zimmermann, Florian; Keles, Dogan
    Abstract: France wants to become carbon-neutral by 2050. Renewable energies and nuclear power are expected to make the main contribution to this goal. However, the average age of nuclear power plants is approaching 37 years of operation in 2022, which is likely to lead to increased outages and expensive maintenance. In addition, newer nuclear power plants are flexible to operate and thus compatible with highly volatile feed-in from renewables. Nevertheless, it remains controversial whether nuclear power plants can still be operated competitively and whether new investments will be made in this technology. Using an agent-based simulation model of the European electricity market, the market impacts of possible nuclear investments are investigated based on two scenarios: a scenario with state-based investments and a scenario with market-based investments. The results of this investigation show that under our assumptions, even with state-based investments, carbon neutrality would not be achieved with the estimated nuclear power plant capacity. Under purely market-based assumptions, large amounts of gas-fired power plant capacity would be installed, which would lead to an increase in France's carbon emissions. State-based investments in nuclear power plants, however, would have a dampening effect on neighboring spot market prices of up to 4.5% on average.
    Keywords: France, nuclear, electricity market, capacity remuneration mechanism, cross-border effect, investment
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:zbw:kitiip:64&r=
  10. By: Matthias Lalisse (Johns Hopkins University)
    Abstract: How much does money drive legislative outcomes in the United States? In this article, we use aggregated campaign finance data as well as a Transformer-based text embedding model to predict roll call votes for legislation in the US Congress with more than 90% accuracy. In a series of model comparisons in which the input feature sets are varied, we investigate the extent to which campaign finance is predictive of voting behavior in comparison with variables like partisan affiliation. We find that the financial interests backing a legislator's campaigns are independently predictive in both chambers of Congress, but also uncover a sizable asymmetry between the Senate and the House of Representatives. These findings are cross-referenced with a Representational Similarity Analysis (RSA) linking legislators' financial and voting records, in which we show that "legislators who vote together get paid together", again discovering an asymmetry between the House and the Senate in the additional predictive power of campaign finance once party is accounted for. We suggest an explanation of these facts in terms of Thomas Ferguson's Investment Theory of Party Competition: due to a number of structural differences between the House and Senate, but chiefly the lower amortized cost of obtaining individuated influence with Senators, political investors prefer operating on the House using the party as a proxy.
    Keywords: campaign finance, congressional voting, investment theory of party competition, machine learning, Representational Similarity Analysis, political money
    JEL: H10 D72 P16 C45
    Date: 2022–02–22
    URL: http://d.repec.org/n?u=RePEc:thk:wpaper:inetwp178&r=
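    The RSA step can be illustrated in miniature: build pairwise-similarity matrices for legislators' (here synthetic) finance and voting profiles and correlate their upper triangles. The data-generating process below is invented and is not the paper's pipeline:
```python
# Representational Similarity Analysis (RSA) in miniature: do legislators whose
# campaign-finance profiles are similar also vote similarly? Correlate the two
# pairwise-similarity matrices over their upper triangles. Data are synthetic.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_leg = 100
latent = rng.normal(size=(n_leg, 3))                 # hidden ideology/interest factors
finance = latent @ rng.normal(size=(3, 20)) + rng.normal(0, 0.5, (n_leg, 20))
votes = (latent @ rng.normal(size=(3, 50)) + rng.normal(0, 0.5, (n_leg, 50)) > 0).astype(float)

sim_finance = 1 - squareform(pdist(finance, metric="cosine"))
sim_votes = 1 - squareform(pdist(votes, metric="cosine"))

iu = np.triu_indices(n_leg, k=1)                     # upper-triangle entries only
rho, p = spearmanr(sim_finance[iu], sim_votes[iu])
print(f"RSA correlation: rho={rho:.2f}, p={p:.1e}")
```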
  11. By: Andrés Fernández Díaz (Facultad de Ciencias Económicas y Empresariales, Universidad Complutense de Madrid); Benito Rodríguez Mallol (Facultad de Ciencias Económicas y Empresariales, Universidad Complutense de Madrid)
    Abstract: This article begins with a brief reference to the history of Artificial Intelligence (A.I.), highlighting the great figures of George Boole, Kurt Gödel and Alan Turing. The algebra of the first, the undecidability theorem of the second and the advanced machine of the third marked the fundamental milestones of this evolution. The use of the binary system and the introduction of quantum mechanics are explained, the latter a great step forward given the advantages of quantum computers and algorithms, along with quantum statistics and other new complementary technologies. Regarding the controversy over whether A.I. can outperform Human Intelligence (H.I.), we conclude that the enormous effort made along the way to the present allows us to affirm that Artificial Intelligence constitutes, in a certain sense, an asymptote of Human Intelligence.
    Keywords: A.I., Turing, computers, quantum algorithms, H.I. Length: 43 pages
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:ucm:doctra:22-01&r=
  12. By: Luca Coraggio (Università di Napoli Federico II); Marco Pagano (University of Naples Federico II, CSEF and EIEF.); Annalisa Scognamiglio (Università di Napoli Federico II and CSEF); Joacim Tåg (Research Institute of Industrial Economics (IFN))
    Abstract: Does the matching between workers and jobs help explain productivity differentials across firms? To address this question we develop a job-worker allocation quality measure (JAQ) by combining employer-employee administrative data with machine learning techniques. The proposed measure is positively and significantly associated with labor earnings over workers’ careers. At firm level, it features a robust positive correlation with firm productivity, and with managerial turnover leading to an improvement in the quality and experience of management. JAQ can be constructed for any employer-employee data including workers’ occupations, and used to explore the effect of corporate restructuring on workers’ allocation and careers.
    Keywords: jobs, workers, matching, mismatch, machine learning, productivity, management.
    JEL: D22 D23 D24 G34 J24 J31 J62 L22 L23 M12 M54
    Date: 2022–03–30
    URL: http://d.repec.org/n?u=RePEc:sef:csefwp:641&r=
  13. By: Alexander Lipton; Artur Sepp
    Abstract: We combine the one-dimensional Monte Carlo simulation and the semi-analytical one-dimensional heat potential method to design an efficient technique for pricing barrier options on assets with correlated stochastic volatility. Our approach to barrier option valuation utilizes two loops. First, we run the outer loop by generating volatility paths via the Monte Carlo method. Second, we condition the price dynamics on a given volatility path and apply the method of heat potentials to solve the conditional problem in closed form in the inner loop. We illustrate the accuracy and efficacy of our semi-analytical approach by comparing it with the two-dimensional Monte Carlo simulation and a hybrid method, which combines the finite-difference technique for the inner loop and the Monte Carlo simulation for the outer loop. We apply our method to the computation of state probabilities (Green's function), survival probabilities, and values of call options with barriers. Our approach provides better accuracy and is orders of magnitude faster than the existing methods. As a by-product of our analysis, we generalize Willard's (1997) conditioning formula for valuation of path-independent options to path-dependent options and derive a novel expression for the joint probability density of the value of drifted Brownian motion and its running minimum.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.07849&r=
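    The two-loop conditioning idea is easiest to see in its simplest special case, a path-independent European call with zero correlation, where conditioning on a simulated variance path reduces the inner problem to Black-Scholes with the path-averaged variance (Willard-style conditioning). The sketch below implements only that special case, not the paper's heat-potential treatment of barriers and correlated volatility; all parameters are illustrative:
```python
# Two-loop conditioning in its simplest form (zero correlation, European call):
# outer loop simulates variance paths; the inner "loop" prices in closed form via
# Black-Scholes with the path-averaged variance. The paper handles BARRIER options
# and correlated volatility with heat potentials; this sketch does not.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
s0, k, r, T = 100.0, 100.0, 0.02, 1.0
kappa, theta, xi, v0 = 2.0, 0.04, 0.3, 0.04      # variance dynamics (illustrative)
n_paths, n_steps = 20000, 200
dt = T / n_steps

def bs_call(s, strike, rate, sigma, tau):
    d1 = (np.log(s / strike) + (rate + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return s * norm.cdf(d1) - strike * np.exp(-rate * tau) * norm.cdf(d2)

v = np.full(n_paths, v0)
avg_var = np.zeros(n_paths)
for _ in range(n_steps):                          # outer loop: variance paths only
    dW = rng.normal(0, np.sqrt(dt), n_paths)
    v = np.maximum(v + kappa * (theta - v) * dt + xi * np.sqrt(v) * dW, 0.0)
    avg_var += v * dt / T

price = bs_call(s0, k, r, np.sqrt(avg_var), T).mean()   # closed form, path by path
print("conditional-MC call price:", round(price, 4))
```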
  14. By: Lugo, Josefrank Pernalete; Rossel, Ysaelen Josefina Odor
    Abstract: The study examines the quarterly national accounts of the Republic of Peru over the 2007-2020 period, at constant 2007 prices; it assesses the value of exports and imports and their association with Gross Domestic Product (GDP). The expectation-maximization cluster analysis estimates GDP at between 123,173 and 141,029 million soles, exports at between 32,664 and 38,717 million soles, and imports at between 34,274 and 38,682 million soles, with an average explained variance of 85.76%. A Kohonen artificial neural network, trained by unsupervised learning, has each unit specialize in a different region of the input space. On this basis, for every 20,000 million soles of variation in GDP, exports vary by 4,000 million soles and imports by 6,000 million soles. In short, the proposed model makes it possible to assess the country's productive capacity, in both goods and services, during the post-Covid-19 economic recovery period.
    Date: 2021–09–24
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:f93zu&r=
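    A toy illustration of the expectation-maximization clustering step on simulated (GDP, exports, imports) triples; the figures are invented and are not the Peruvian national-accounts data:
```python
# Fit a Gaussian mixture (estimated by EM) to quarterly (GDP, exports, imports)
# observations and read off the cluster means. Numbers are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
quarters = 56                                    # 2007-2020
gdp = rng.normal(132000, 9000, quarters)         # millions of soles (synthetic)
exports = 0.27 * gdp + rng.normal(0, 1500, quarters)
imports = 0.28 * gdp + rng.normal(0, 1500, quarters)
data = np.column_stack([gdp, exports, imports])

gm = GaussianMixture(n_components=2, random_state=0).fit(data)
for mean in gm.means_:
    print("cluster mean (GDP, exports, imports):", np.round(mean, 0))
```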
  15. By: Shi, Chengchun; Wang, Xiaoyu; Luo, Shikai; Zhu, Hongtu; Ye, Jieping; Song, Rui
    Abstract: A/B testing, or online experimentation, is a standard business strategy for comparing a new product with an old one in the pharmaceutical, technological, and traditional industries. Major challenges arise in online experiments on two-sided marketplace platforms (e.g., Uber), where there is only one unit that receives a sequence of treatments over time. In those experiments, the treatment at a given time impacts the current outcome as well as future outcomes. The aim of this paper is to introduce a reinforcement learning framework for carrying out A/B testing in these experiments, while characterizing the long-term treatment effects. Our proposed testing procedure allows for sequential monitoring and online updating. It is generally applicable to a variety of treatment designs in different industries. In addition, we systematically investigate the theoretical properties (e.g., size and power) of our testing procedure. Finally, we apply our framework to both simulated data and a real-world data example obtained from a technological company to illustrate its advantage over the current practice. A Python implementation of our test is available at https://github.com/callmespring/CausalRL .
    Keywords: A/B testing; online experiment; reinforcement learning; causal inference; sequential testing; online updating; Research Support Fund; NSF-DMS-1555244; NSF-DMS-2113637; T&F deal
    JEL: C1
    Date: 2022–01–20
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:113310&r=
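    The quantity being tested can be illustrated with a toy Markov environment in which today's treatment shifts tomorrow's state: the long-run average outcome under the new policy minus that under the old one. The simulation below illustrates only that estimand, not the paper's sequential testing procedure (whose own implementation is available at the linked repository):
```python
# Toy version of the long-term treatment effect: the difference in long-run average
# outcomes between policies when today's treatment shifts tomorrow's state.
import numpy as np

rng = np.random.default_rng(9)

def long_run_average(policy_effect, horizon=20000):
    state, total = 0.0, 0.0
    for _ in range(horizon):
        reward = state + rng.normal()             # outcome depends on the current state
        total += reward
        state = 0.8 * state + policy_effect       # treatment shifts the next state
    return total / horizon

effect = long_run_average(0.1) - long_run_average(0.0)
print("estimated long-term treatment effect:", round(effect, 3))
```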
  16. By: P\'eter Bir\'o; Gergely Cs\'aji
    Abstract: In a multiple partners matching problem the agents can have multiple partners up to their capacities. In this paper we consider both the two-sided many-to-many stable matching problem and the one-sided stable fixtures problem under lexicographic preferences. We study strong core and Pareto-optimal solutions for this setting from a computational point of view. First, we provide an example to show that the strong core can be empty even under these severe restrictions for many-to-many problems, and that deciding the non-emptiness of the strong core is NP-hard. We also show that, for a given matching, checking Pareto-optimality and the strong core property are co-NP-complete problems for the many-to-many problem, and that deciding the existence of a complete Pareto-optimal matching is also NP-hard for the fixtures problem. On the positive side, we give efficient algorithms for finding a near-feasible strong core solution, where the capacities are violated by at most one unit for each agent, and also for finding a half-matching in the strong core of fractional matchings. These polynomial-time algorithms are based on the Top Trading Cycle algorithm. Finally, we show that finding a maximum-size Pareto-optimal matching can be done efficiently for many-to-many problems, which is in contrast with the hardness result for the fixtures problem.
    Date: 2022–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2202.05484&r=
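    The Top Trading Cycle building block mentioned above, in its textbook one-to-one housing-market form (each agent owns one object and has strict preferences); the paper's algorithms extend this idea to capacitated problems:
```python
# Classic Top Trading Cycles (TTC) for the basic housing market: agent i owns
# object i and has strict preferences over objects. Only the textbook building
# block, not the capacitated variants studied in the paper.
def top_trading_cycles(preferences):
    """preferences[a]: agent a's strictly ordered list of objects; agent i owns object i."""
    owner = {obj: obj for obj in preferences}     # object -> owning agent
    remaining = set(preferences)
    assignment = {}
    while remaining:
        # each remaining agent points to the owner of their best remaining object
        points_to = {a: owner[next(o for o in preferences[a] if o in remaining)]
                     for a in remaining}
        # find a cycle by walking the pointer graph from an arbitrary agent
        seen, a = [], next(iter(remaining))
        while a not in seen:
            seen.append(a)
            a = points_to[a]
        cycle = seen[seen.index(a):]
        # everyone in the cycle receives the object they point at, then leaves
        for a in cycle:
            assignment[a] = next(o for o in preferences[a] if o in remaining)
        for a in cycle:
            remaining.discard(a)
    return assignment

prefs = {0: [1, 2, 0], 1: [0, 2, 1], 2: [2, 0, 1]}   # agent i owns object i
print(top_trading_cycles(prefs))                      # {0: 1, 1: 0, 2: 2}
```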
  17. By: Radek Šauer
    Abstract: This documentation concisely describes the dynamic stochastic general-equilibrium model that the ifo Institute currently uses for simulations and business-cycle analysis. The model consists of three countries and contains a wide range of rigidities. The model is regularly estimated using quarterly macroeconomic data.
    Keywords: DSGE, simulations, forecasting, business-cycle analysis
    JEL: E17
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:ces:ifowps:_366&r=
  18. By: Müller, Henrik; Schmidt, Tobias; Rieger, Jonas; Hufnagel, Lena Marie; Hornig, Nico
    Abstract: In this paper, we present a new indicator to measure the media coverage of inflation. Our Inflation Perception Indicator (IPI) for Germany is based on a corpus of three million articles published by broadsheet newspapers between January 2001 and February 2022. It is designed to detect thematic trends, thereby providing new insights into the dynamics of inflation perception over time. These results may prove particularly valuable at the current juncture, where massive uncertainty prevails due to geopolitical conflicts and pandemic-related supply-chain jitters. Economists inspired by Shiller (2017; 2020) have called for analyses of economic narratives to complement econometric analyses. The IPI operationalizes such an approach by isolating inflation narratives circulating in the media. Methodologically, the IPI makes use of RollingLDA (Rieger et al. 2021), a dynamic topic modeling approach that refines the rather static original LDA (Blei et al. 2003) to allow for changes in the model's structure over time. By modeling the process of collective memory, where experiences of the past are partly overwritten and altered by new ones and partly sink into oblivion, RollingLDA is a potent tool for capturing the evolution of economic narratives as social phenomena. In addition, it is suitable for producing stable time series, so that the IPI can be updated frequently. Our initial results show a narrative landscape in turmoil. Never in the past two decades has there been such a broad shift in inflation perception, and therefore, possibly, in inflation expectations. Second-round effects, such as significant wage demands, that have not played a major role in Germany for a long time, also seem to be in the making. Towards the end of the time horizon, raw material prices are high on the agenda too, triggered by the Russian war against Ukraine and the ensuing sanctions against the aggressor. We would like to encourage researchers to use our data and are happy to share it on request.
    Keywords: Inflation, Expectations, Narratives, Latent Dirichlet Allocation, Covid-19, Text Mining, Computational Methods, Behavioral Economics
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:zbw:docmaw:9&r=
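    The chunk-wise updating idea behind such dynamic topic models can be approximated with scikit-learn's online LDA, updating the model with each new time slice via partial_fit; this is a rough stand-in, not the RollingLDA method itself, and the toy corpus is invented:
```python
# Rough stand-in for chunk-wise topic updating: fit an online LDA and update it with
# each new time slice of articles, then watch how topic weights shift over time.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

slices = [  # one list of "articles" per time slice (toy data)
    ["energy prices rise sharply", "wages stay flat despite inflation fears"],
    ["supply chain problems push prices up", "central bank debates interest rates"],
    ["gas prices and inflation dominate the news", "unions demand higher wages"],
]

vec = CountVectorizer()
vec.fit([doc for sl in slices for doc in sl])     # fixed vocabulary across slices
lda = LatentDirichletAllocation(n_components=2, random_state=0, learning_method="online")

for t, sl in enumerate(slices):
    X = vec.transform(sl)
    lda.partial_fit(X)                            # update the topics with the new slice
    share = lda.transform(X).mean(axis=0)         # average topic weights in this slice
    print(f"slice {t}: topic shares {np.round(share, 2)}")
```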

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.