nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒05‒03
25 papers chosen by



  1. Optimal Stopping via Randomized Neural Networks By Calypso Herrera; Florian Krach; Pierre Ruyssen; Josef Teichmann
  2. Modeling Managerial Search Behavior based on Simon's Concept of Satisficing By Friederike Wall
  3. An investigation into modelling approaches for industrial symbiosis: a literature review By Demartini, Melissa; Bertani, Filippo; Tonelli, Flavio; Raberto, Marco; Cincotti, Silvano
  4. K-expectiles clustering By Wang, Bingling; Li, Yingxing; Härdle, Wolfgang
  5. Addressing Sample Selection Bias for Machine Learning Methods By Dylan Brewer; Alyssa Carlson
  6. Applying Convolutional Neural Networks for Stock Market Trends Identification By Ekaterina Zolotareva
  7. Stochastic Gradient Variational Bayes and Normalizing Flows for Estimating Macroeconomic Models By Ramis Khbaibullin; Sergei Seleznev
  8. Algorithm is Experiment: Machine Learning, Market Design, and Policy Eligibility Rules By Yusuke Narita; Kohei Yata
  10. Computational Performance of Deep Reinforcement Learning to find Nash Equilibria By Christoph Graf; Viktor Zobernig; Johannes Schmidt; Claude Klöckl
  10. Distinguish the indistinguishable: a Deep Reinforcement Learning approach for volatility targeting models By Eric Benhamou; David Saltiel; Serge Tabachnik; Sui Kai Wong; François Chareyron
  11. CATE meets ML: Conditional average treatment effect and machine learning By Jacob, Daniel
  12. Polynomial chaos expansion: Efficient evaluation and estimation of computational models By Daniel Fehrle; Christopher Heiberger
  13. Control and Spread of Contagion in Networks By John Higgins; Tarun Sabarwal
  14. Optimal Targeting in Fundraising: A Machine-Learning Approach By Tobias Cagala; Ulrich Glogowsky; Johannes Rincke; Anthony Strittmatter
  15. Constructing long-short stock portfolio with a new listwise learn-to-rank algorithm By Xin Zhang; Lan Wu; Zhixue Chen
  16. Sparse Grid Method for Highly Efficient Computation of Exposures for xVA By Lech A. Grzelak
  17. Modelling Universal Basic Income using UKMOD By De Henau, Jerome; Himmelweit, Susan; Reis, Sara
  18. Prediction of Food Production Using Machine Learning Algorithms of Multilayer Perceptron and ANFIS By Saeed Nosratabadi; Sina Ardabili; Zoltan Lakner; Csaba Mako; Amir Mosavi
  19. Breaking Bad: Supply Chain Disruptions in a Streamlined Agent Based Model By Domenico Delli Gatti; Elisa Grugni
  20. Daily news sentiment and monthly surveys: A mixed–frequency dynamic factor model for nowcasting consumer confidence By Andres Algaba; Samuel Borms; Kris Boudt; Brecht Verbeken
  21. Form 10-Q Itemization By Yanci Zhang; Tianming Du; Yujie Sun; Lawrence Donohue; Rui Dai
  22. Rational vs. irrational beliefs in a complex world By Böhl, Gregor; Hommes, Cars H.
  23. Extending the Heston Model to Forecast Motor Vehicle Collision Rates By Darren Shannon; Grigorios Fountas
  24. Financial Contagion During the Covid-19 Pandemic: A Wavelet-Copula-GARCH Approach. By Alqaralleh, Huthaifa; Canepa, Alessandra; Chini, Zanetti
  25. Re-investigating the oil-food price co-movement using wavelet analysis By Loretta Mastroeni; Greta Quaresima; Pierluigi Vellucci

  1. By: Calypso Herrera; Florian Krach; Pierre Ruyssen; Josef Teichmann
    Abstract: This paper presents new machine learning approaches to approximate the solution of optimal stopping problems. The key idea of these methods is to use neural networks, where the hidden layers are generated randomly and only the last layer is trained, in order to approximate the continuation value. Our approaches are applicable to high-dimensional problems where existing approaches become increasingly impractical. In addition, since our approaches can be optimized using a simple linear regression, they are very easy to implement and theoretical guarantees can be provided. Our randomized reinforcement learning approach outperforms the state of the art and other relevant machine learning approaches in Markovian examples, and our randomized recurrent neural network approach does so in non-Markovian examples.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.13669&r=
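    Illustrative sketch: the randomized-last-layer idea can be read as a random-feature regression for the continuation value inside a Longstaff-Schwartz-style backward induction. The Bermudan put setup, the network width, and all parameter values below are assumptions chosen for illustration, not the paper's experiments.
      import numpy as np

      rng = np.random.default_rng(0)

      # Illustrative Bermudan put under geometric Brownian motion (not the paper's setup).
      S0, K, r, sigma, T, n_steps, n_paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20_000
      dt = T / n_steps

      # Simulate asset paths.
      Z = rng.standard_normal((n_paths, n_steps))
      S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
      S = np.hstack([np.full((n_paths, 1), S0), S])

      def payoff(s):
          return np.maximum(K - s, 0.0)

      # Random hidden layer: weights drawn once and frozen; only the linear readout is fit.
      n_hidden = 100
      W = rng.standard_normal((1, n_hidden))
      b = rng.standard_normal(n_hidden)
      def features(s):
          x = s[:, None] / K - 1.0            # crude normalisation of the state
          return np.tanh(x @ W + b)

      # Backward induction: regress discounted continuation values on the random features.
      cashflow = payoff(S[:, -1])
      for k in range(n_steps - 1, 0, -1):
          cashflow *= np.exp(-r * dt)
          itm = payoff(S[:, k]) > 0           # regress on in-the-money paths only
          Phi = features(S[itm, k])
          beta, *_ = np.linalg.lstsq(Phi, cashflow[itm], rcond=None)
          cont = Phi @ beta
          exercise = payoff(S[itm, k]) > cont
          idx = np.where(itm)[0][exercise]
          cashflow[idx] = payoff(S[idx, k])

      price = np.exp(-r * dt) * cashflow.mean()
      print(f"Bermudan put estimate: {price:.3f}")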
  2. By: Friederike Wall
    Abstract: Computational models of managerial search often build on backward-looking search based on hill-climbing algorithms. Regardless of its prevalence, there is some evidence that this family of algorithms does not universally represent managers' search behavior. Against this background, the paper proposes an alternative algorithm that captures key elements of Simon's concept of satisficing, which has received considerable support in behavioral experiments. The paper contrasts the satisficing-based algorithm with two variants of hill-climbing search in an agent-based model of a simple decision-making organization. The model builds on the framework of NK fitness landscapes, which allows controlling for the complexity of the decision problem to be solved. The results suggest that the model's behavior may differ remarkably depending on whether satisficing or hill-climbing serves as the algorithmic representation of decision-makers' search. Moreover, with the satisficing algorithm, results indicate oscillating aspiration levels, even into negative territory, and intense - and potentially destabilizing - search activities when intra-organizational complexity increases. Findings may shed new light on prior computational models of decision-making in organizations and point to avenues for future research.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.14002&r=
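    Illustrative sketch: the contrast between hill-climbing and satisficing search can be illustrated on a random fitness landscape over binary decision vectors; the simplified landscape (no explicit NK interaction structure), the organisational setting, and the aspiration-updating rule below are assumptions, not the paper's specification.
      import numpy as np

      rng = np.random.default_rng(1)

      # Illustrative random landscape over binary strings (a stand-in for a full NK model).
      N = 10
      fitness_table = rng.random(2 ** N)
      def fitness(x):
          return fitness_table[int("".join(map(str, x)), 2)]

      def neighbors(x):
          for i in range(len(x)):
              y = x.copy(); y[i] ^= 1
              yield y

      def hill_climb(x, steps=200):
          # Move to the best neighbour as long as it improves fitness.
          for _ in range(steps):
              best = max(neighbors(x), key=fitness)
              if fitness(best) <= fitness(x):
                  break
              x = best
          return fitness(x)

      def satisfice(x, steps=200, aspiration=0.5, adjust=0.02):
          # Accept the first alternative meeting the aspiration level; adapt the aspiration
          # upward after success and downward after failure (an assumed updating rule).
          for _ in range(steps):
              for y in neighbors(x):
                  if fitness(y) >= aspiration:
                      x, aspiration = y, aspiration + adjust
                      break
              else:
                  aspiration -= adjust
          return fitness(x)

      x0 = rng.integers(0, 2, N)
      print("hill-climbing:", hill_climb(x0.copy()))
      print("satisficing:  ", satisfice(x0.copy()))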
  3. By: Demartini, Melissa; Bertani, Filippo; Tonelli, Flavio; Raberto, Marco; Cincotti, Silvano
    Abstract: The aim of this paper is to understand how to model industrial symbiosis networks in order to favour the implementation of industrial symbiosis and to provide a framework that guides companies and policy makers towards it. Industrial symbiosis is a clear example of a complex adaptive system, and traditional approaches (i.e., input/output analysis, material flow analysis) cannot capture its dynamic behaviour. Therefore, the aim of this literature review is to investigate: i) the modelling and simulation approaches most used to analyse industrial symbiosis and ii) their characteristics in terms of simulation methods, interaction mechanisms and simulation software. Findings from our research suggest that a hybrid modelling and simulation approach, based on agent-based modelling and system dynamics, could be an appropriate method for industrial symbiosis analysis and design.
    Keywords: industrial symbiosis, hybrid modelling and simulation approach, literature review, system dynamics, agent based modelling
    JEL: C63
    Date: 2021–04–27
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:107448&r=
  4. By: Wang, Bingling; Li, Yingxing; Härdle, Wolfgang
    Abstract: K-means clustering is one of the most widely used partitioning algorithms in cluster analysis due to its simplicity and computational efficiency, but it may not provide ideal clustering results when applied to data with non-spherically shaped clusters. By considering an asymmetrically weighted distance, we propose K-expectile clustering and search for clusters via a greedy algorithm that minimizes the within-cluster τ-variance. We provide algorithms based on two schemes: fixed-τ clustering and adaptive-τ clustering. Validated by simulation results, our method has enhanced performance on data with asymmetrically shaped clusters or clusters with a complicated structure. Applications of our method show that fixed-τ clustering can bring some flexibility to segmentation with decent accuracy, while adaptive-τ clustering may yield better performance.
    Keywords: clustering, expectiles, asymmetric quadratic loss, image segmentation
    JEL: C00
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:zbw:irtgdp:2021003&r=
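    Illustrative sketch: a K-means-style loop in which assignments minimize an asymmetric quadratic (expectile) loss and centers are updated to coordinatewise τ-expectiles; this fixed-τ variant and the toy data are assumptions for illustration, not necessarily the authors' exact algorithm.
      import numpy as np

      rng = np.random.default_rng(2)

      def expectile(x, tau, iters=50):
          """Coordinatewise tau-expectile via the weighted-mean fixed-point iteration."""
          e = x.mean(axis=0)
          for _ in range(iters):
              w = np.where(x >= e, tau, 1.0 - tau)
              e = (w * x).sum(axis=0) / w.sum(axis=0)
          return e

      def asym_sq_loss(x, c, tau):
          # Asymmetric quadratic loss of points x around a candidate center c.
          d = x - c
          w = np.where(d >= 0, tau, 1.0 - tau)
          return (w * d ** 2).sum(axis=-1)

      def k_expectile(X, k, tau, n_iter=30):
          centers = X[rng.choice(len(X), k, replace=False)]
          for _ in range(n_iter):
              # Assign each point to the center with the smallest asymmetric squared loss.
              losses = np.stack([asym_sq_loss(X, c, tau) for c in centers], axis=1)
              labels = losses.argmin(axis=1)
              # Update each center to the coordinatewise tau-expectile of its cluster.
              new_centers = []
              for j in range(k):
                  pts = X[labels == j]
                  new_centers.append(expectile(pts, tau) if len(pts) else centers[j])
              centers = np.stack(new_centers)
          return labels, centers

      # Toy data: two skewed clusters (illustrative only).
      X = np.vstack([rng.gamma(2.0, 1.0, (200, 2)), rng.gamma(2.0, 1.0, (200, 2)) + 5.0])
      labels, centers = k_expectile(X, k=2, tau=0.7)
      print(centers)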
  5. By: Dylan Brewer (School of Economics, Georgia Institute of Technology); Alyssa Carlson (Department of Economics, University of Missouri)
    Abstract: We study approaches for adjusting machine learning methods when the training sample differs from the prediction sample on unobserved dimensions. The machine learning literature predominantly assumes selection only on observed dimensions. Common suggestions are to re-weight or control for variables that influence selection as solutions to selection on observables. Simulation results indicate that common machine learning practices such as re-weighting or controlling for variables that influence selection into the training or testing sample often worsen sample selection bias. We suggest two control-function approaches that remove the effects of selection bias before training and find that they reduce mean-squared prediction error in simulations with a high degree of selection. We apply these approaches to predicting the vote share of the incumbent in gubernatorial elections using previously observed re-election bids. We find that ignoring selection on unobservables leads to substantially higher predicted vote shares for the incumbent than when the control-function approach is used.
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:2102&r=
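    Illustrative sketch: one classical control-function recipe in this spirit is the Heckman two-step, with a probit selection equation and an inverse Mills ratio entering the outcome equation. The simulated data and the linear outcome model below are assumptions for clarity; they need not coincide with the paper's proposed approaches, whose point is to combine the control function with machine-learning predictors.
      import numpy as np
      from scipy.stats import norm
      import statsmodels.api as sm

      rng = np.random.default_rng(3)

      # Simulated data where selection into the training sample depends on an unobservable
      # that also drives the outcome (selection on unobservables).
      n = 5000
      X = rng.normal(size=(n, 2))                    # outcome covariates
      Z = rng.normal(size=n)                         # selection instrument
      u = rng.normal(size=n)                         # shared unobservable
      y = 1.0 + X @ np.array([1.0, -0.5]) + 0.8 * u + rng.normal(size=n)
      selected = (0.5 * Z + u + rng.normal(size=n)) > 0

      # Step 1: probit selection equation estimated on everyone.
      W = sm.add_constant(np.column_stack([X, Z]))
      probit = sm.Probit(selected.astype(int), W).fit(disp=0)
      xb = W @ probit.params
      mills = norm.pdf(xb) / norm.cdf(xb)            # inverse Mills ratio (control function)

      # Step 2: outcome equation on the selected sample, including the control function.
      Xc = sm.add_constant(np.column_stack([X, mills]))
      ols = sm.OLS(y[selected], Xc[selected]).fit()

      # Step 3: predictions for the full sample drop the control-function term, since the
      # unobservable has mean zero unconditionally in this setup (an illustrative simplification).
      beta = ols.params[:-1]                         # coefficients excluding the mills term
      y_hat = sm.add_constant(X) @ beta

      naive = sm.OLS(y[selected], sm.add_constant(X)[selected]).fit()
      print("naive intercept:", round(naive.params[0], 3), " corrected intercept:", round(beta[0], 3))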
  6. By: Ekaterina Zolotareva
    Abstract: In this paper we apply a specific type of ANN - convolutional neural networks (CNNs) - to the problem of finding the start and end points of trends, which are the optimal points for entering and leaving the market. We aim to explore long-term trends, which last several months, not days. The key distinction of our model is that its labels are fully based on expert opinion data. In contrast to the various models based solely on stock price data, some market experts still argue that traders are able to see hidden opportunities. The labelling was done via a GUI, which means that the experts worked directly with images, not numerical data. This fact makes CNNs the natural choice of algorithm. The proposed framework requires the sequential interaction of three CNN submodels, which identify the presence of a changepoint in a window, locate it and finally recognize the type of the new tendency - upward, downward or flat. These submodels have certain pitfalls, so the calibration of their hyperparameters is the main direction of further research. The research addresses issues such as imbalanced datasets and contradicting labels, as well as the need for specific quality metrics to ensure practical applicability. This paper is the full text of the research presented at the 20th International Conference on Artificial Intelligence and Soft Computing Web System (ICAISC 2021).
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.13948&r=
  7. By: Ramis Khbaibullin (Bank of Russia, Russian Federation); Sergei Seleznev (Bank of Russia, Russian Federation)
    Abstract: We illustrate the ability of the stochastic gradient variational Bayes algorithm, a very popular machine learning tool, to work with macro data and macro models. Choosing two approximations (mean-field and normalizing flows), we test the properties of the algorithms on a set of models and show that these models can be estimated quickly despite the presence of estimated hyperparameters. Finally, we discuss the difficulties and possible directions of further research.
    Keywords: Stochastic gradient variational Bayes, normalizing flows, mean-field approximation, sparse Bayesian learning, BVAR, Bayesian neural network, DFM.
    JEL: C11 C32 C45 E17
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:bkr:wpaper:wps61&r=
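    Illustrative sketch: stochastic gradient variational Bayes with a mean-field Gaussian approximation and the reparameterization trick, shown on a toy Bayesian linear regression rather than a macro model; the prior, likelihood and optimizer settings below are assumptions for illustration.
      import torch

      torch.manual_seed(0)

      # Toy Bayesian linear regression: y = X @ theta + noise, prior theta ~ N(0, I).
      n, d = 500, 3
      X = torch.randn(n, d)
      theta_true = torch.tensor([1.0, -2.0, 0.5])
      y = X @ theta_true + 0.5 * torch.randn(n)

      # Mean-field Gaussian approximation q(theta) = N(mu, diag(exp(log_std)^2)).
      mu = torch.zeros(d, requires_grad=True)
      log_std = torch.zeros(d, requires_grad=True)
      opt = torch.optim.Adam([mu, log_std], lr=0.02)

      def log_joint(theta):
          # Gaussian likelihood (noise sd = 0.5) plus standard normal prior, up to constants.
          resid = y - X @ theta
          return -0.5 * (resid ** 2).sum() / 0.25 - 0.5 * (theta ** 2).sum()

      for step in range(2000):
          opt.zero_grad()
          eps = torch.randn(d)
          theta = mu + torch.exp(log_std) * eps          # reparameterization trick
          # Negative ELBO with a single Monte Carlo draw; the entropy of q enters analytically.
          neg_elbo = -(log_joint(theta) + log_std.sum())
          neg_elbo.backward()
          opt.step()

      print("posterior mean estimate:", mu.detach())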
  8. By: Yusuke Narita; Kohei Yata
    Abstract: Algorithms produce a growing portion of decisions and recommendations both in policy and business. Such algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments) since the algorithms make decisions based only on observable input variables. We use this observation to develop a treatment-effect estimator for a class of stochastic and deterministic algorithms. Our estimator is shown to be consistent and asymptotically normal for well-defined causal effects. A key special case of our estimator is a high-dimensional regression discontinuity design. The proofs use tools from differential geometry and geometric measure theory, which may be of independent interest. The practical performance of our method is first demonstrated in a high-dimensional simulation resembling decision-making by machine learning algorithms. Our estimator has smaller mean squared errors compared to alternative estimators. We finally apply our estimator to evaluate the effect of the Coronavirus Aid, Relief, and Economic Security (CARES) Act, where more than $10 billion worth of relief funding was allocated to hospitals via an algorithmic rule. The estimates suggest that the relief funding had little effect on COVID-19-related hospital activity levels. Naive OLS and IV estimates exhibit substantial selection bias.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.12909&r=
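    Illustrative sketch: when an algorithmic rule is stochastic with known assignment probabilities p(X), the quasi-experimental variation can be exploited with a simple inverse-propensity-weighted contrast. This is not the paper's estimator (which also covers deterministic rules such as high-dimensional regression discontinuity designs), and the data-generating process below is assumed for illustration.
      import numpy as np

      rng = np.random.default_rng(4)

      # A known stochastic algorithmic rule: treatment probability is a fixed function of X.
      n = 20_000
      X = rng.normal(size=(n, 5))
      p = 1.0 / (1.0 + np.exp(-X[:, 0]))        # propensity known from the algorithm's code
      D = rng.random(n) < p                      # algorithmic treatment assignment
      tau = 2.0                                  # true effect
      Y = X @ np.array([1.0, 0.5, 0.0, -1.0, 0.2]) + tau * D + rng.normal(size=n)

      # Inverse-propensity-weighted (Horvitz-Thompson) estimate of the average effect.
      ate_ipw = np.mean(D * Y / p - (1 - D) * Y / (1 - p))
      print(f"IPW ATE estimate: {ate_ipw:.3f} (true effect {tau})")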
  9. By: Christoph Graf; Viktor Zobernig; Johannes Schmidt; Claude Klöckl
    Abstract: We test the performance of deep deterministic policy gradient (DDPG), a deep reinforcement learning algorithm able to handle continuous state and action spaces, in learning Nash equilibria in a setting where firms compete in prices. These algorithms are typically considered model-free because they do not require transition probability functions (as in, e.g., Markov games) or predefined functional forms. Despite being model-free, the algorithm uses a large set of parameters in its various steps, e.g., learning rates, memory buffers, state-space dimensioning, normalizations, and noise decay rates. The purpose of this work is to systematically test the effect of these parameter configurations on convergence to the analytically derived Bertrand equilibrium. We find parameter choices that can reach convergence rates of up to 99%. The reliable convergence may make the method a useful tool to study the strategic behavior of firms even in more complex settings.
    Keywords: Bertrand Equilibrium, Competition in Uniform Price Auctions, Deep Deterministic Policy Gradient Algorithm, Parameter Sensitivity Analysis
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.12895&r=
  10. By: Eric Benhamou (LAMSADE - Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision - Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres - CNRS - Centre National de la Recherche Scientifique); David Saltiel; Serge Tabachnik; Sui Kai Wong; François Chareyron
    Abstract: Can an agent efficiently learn to distinguish extremely similar financial models in an environment dominated by noise and regime changes? Standard statistical methods based on averaging or ranking models fail precisely because of regime changes and noisy environments. Additional contextual information in Deep Reinforcement Learning (DRL) helps in training an agent to distinguish between different financial models whose time series are very similar. Our contributions are four-fold: (i) we combine model-based and model-free Reinforcement Learning (RL); the model-free part allows us to select between the different models; (ii) we present a concept, called "walk-forward analysis", defined by successive training and testing on expanding periods, to assess the robustness of the resulting agent; (iii) we present a feature-importance method, similar to the one used in gradient boosting, based on feature sensitivities; (iv) last but not least, we introduce the concept of statistical difference significance based on a two-tailed t-test, to highlight the ways in which our models differ from more traditional ones. Our experimental results show that our approach outperforms the benchmarks in almost all evaluation metrics commonly used in financial mathematics, namely net performance, Sharpe ratio, Sortino ratio, maximum drawdown, and maximum drawdown over volatility.
    Keywords: Feature sensitivity, Walk forward, Portfolio allocation, Model-free, Model-based, Deep Reinforcement Learning
    Date: 2021–04–21
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03202431&r=
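    Illustrative sketch: the "walk-forward analysis" idea of successive training and testing on expanding periods can be written as a simple split generator; the window lengths below are assumptions for illustration.
      import numpy as np

      def walk_forward_splits(n_obs, initial_train, test_size):
          """Successive expanding training windows followed by out-of-sample test windows."""
          start = initial_train
          while start + test_size <= n_obs:
              train_idx = np.arange(0, start)
              test_idx = np.arange(start, start + test_size)
              yield train_idx, test_idx
              start += test_size

      # Example: 1000 daily observations, first 500 for initial training, 100-day test blocks.
      for train_idx, test_idx in walk_forward_splits(1000, 500, 100):
          print(f"train 0..{train_idx[-1]}, test {test_idx[0]}..{test_idx[-1]}")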
  11. By: Jacob, Daniel
    Abstract: For treatment effects - one of the core issues in modern econometric analysis - prediction and estimation are two sides of the same coin. As it turns out, machine learning methods are the tool for generalized prediction models. Combined with econometric theory, they allow us to estimate not only the average but also a personalized treatment effect - the conditional average treatment effect (CATE). In this tutorial, we give an overview of novel methods, explain them in detail, and apply them via Quantlets in real data applications. As two empirical examples, we study the effect that microcredit availability has on the amount of money borrowed and whether 401(k) pension plan eligibility has an impact on net financial assets. The presented toolbox of methods contains meta-learners, like the Doubly-Robust, the R-, T- and X-learner, and methods that are specifically designed to estimate the CATE, like the causal BART and the generalized random forest. In both the microcredit and the 401(k) example, we find a positive treatment effect for all observations but diverse evidence of treatment effect heterogeneity. An additional simulation study, where the true treatment effect is known, allows us to compare the different methods and to observe patterns and similarities.
    Keywords: Causal Inference, CATE, Machine Learning, Tutorial
    JEL: C00
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:zbw:irtgdp:2021005&r=
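    Illustrative sketch: one of the meta-learners covered in the tutorial, the T-learner, fits separate outcome models on treated and control units and differences their predictions to estimate the CATE; the simulated data and the random-forest base learner below are assumptions for illustration.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(5)

      # Simulated data with a heterogeneous treatment effect tau(x) = 1 + x_0.
      n = 4000
      X = rng.normal(size=(n, 5))
      D = rng.integers(0, 2, n)                      # randomized treatment
      tau = 1.0 + X[:, 0]
      Y = X[:, 1] + tau * D + rng.normal(size=n)

      # T-learner: fit separate outcome models on treated and control units,
      # then take the difference of their predictions as the CATE estimate.
      m1 = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[D == 1], Y[D == 1])
      m0 = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[D == 0], Y[D == 0])
      cate_hat = m1.predict(X) - m0.predict(X)

      print("correlation with true tau:", np.corrcoef(cate_hat, tau)[0, 1].round(3))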
  12. By: Daniel Fehrle; Christopher Heiberger
    Abstract: Polynomial chaos expansion (PCE) provides a method that enables the user to represent a quantity of interest (QoI) of a model's solution as a series expansion of uncertain model inputs, usually its parameters. Among the QoIs are the policy function, the second moments of observables, or the posterior kernel. Hence, PCE sidesteps the repeated and time-consuming evaluations of the model's outcomes. The paper discusses the suitability of PCE for computational economics. We therefore introduce the theory behind PCE, analyze the convergence behavior for different elements of the solution of the standard real business cycle model as an illustrative example, and check the accuracy when standard empirical methods are applied. The results are promising, both in terms of accuracy and efficiency.
    Keywords: Polynomial Chaos Expansion, parameter inference, parameter uncertainty, solution methods
    JEL: C11 C13 C32 C63
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:bav:wpaper:202_fehrleheibergerhuber&r=
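    Illustrative sketch: a regression-based polynomial chaos expansion of a scalar quantity of interest in one uniformly distributed parameter, using Legendre polynomials; the toy quantity of interest, the parameter range, and the node choice below are assumptions and merely stand in for a model solution such as a policy function.
      import numpy as np
      from numpy.polynomial import legendre

      # Quantity of interest as a function of an uncertain parameter theta ~ U(0.90, 0.99)
      # (think of a discount factor); the true map is a stand-in for a model solution.
      def qoi(theta):
          return 1.0 / (1.0 - theta)

      a, b = 0.90, 0.99
      degree = 6

      # Regression-based PCE: evaluate the model at a modest number of nodes, then fit
      # Legendre coefficients in the rescaled variable xi in [-1, 1] by least squares.
      theta_nodes = np.linspace(a, b, 25)
      xi_nodes = 2.0 * (theta_nodes - a) / (b - a) - 1.0
      V = legendre.legvander(xi_nodes, degree)            # Vandermonde of Legendre polynomials
      coef, *_ = np.linalg.lstsq(V, qoi(theta_nodes), rcond=None)

      # The surrogate can now replace repeated model evaluations, e.g. in a Monte Carlo study.
      theta_draws = np.random.default_rng(6).uniform(a, b, 100_000)
      xi_draws = 2.0 * (theta_draws - a) / (b - a) - 1.0
      surrogate = legendre.legval(xi_draws, coef)
      print("surrogate mean:", surrogate.mean(), " direct mean:", qoi(theta_draws).mean())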
  13. By: John Higgins (Department of Economics, University of Kansas, Lawrence, KS 66045, USA); Tarun Sabarwal (Department of Economics, University of Kansas, Lawrence, KS 66045, USA)
    Abstract: We study proliferation of an action in a network coordination game that is generalized to include a tractable, model-based measure of virality to make it more realistic. We present new algorithms to compute contagion thresholds and equilibrium depth of contagion and prove their theoretical properties. These algorithms apply to arbitrary connected networks and starting sets, both with and without virality. Our algorithms are easy to implement and help to quantify relationships previously inaccessible due to computational intractability. Using these algorithms, we study the spread of contagion in scale-free networks with 1,000 players using millions of Monte Carlo simulations. Our results highlight channels through which contagion may spread in networks. Small starting sets lead to greater depth of contagion in less connected networks. Virality amplifies the effect of a larger starting set and may make full network contagion inevitable in cases where it would not occur otherwise. It also brings contagion dynamics closer to a type of singularity. Our model and analysis can be used to understand potential consequences of policies designed to control or spread contagion in networks.
    JEL: C62 C72
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202111&r=
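    Illustrative sketch: threshold (best-response) contagion dynamics on a network, where a player adopts once the adopting share of their neighbours reaches a threshold; the random graph, the threshold value, and the starting set below are assumptions for illustration and omit the paper's virality measure.
      import numpy as np

      rng = np.random.default_rng(7)

      # Random graph adjacency matrix (a stand-in for the scale-free networks in the paper).
      n, p_edge = 200, 0.03
      A = rng.random((n, n)) < p_edge
      A = np.triu(A, 1)
      A = (A | A.T).astype(int)                  # symmetric adjacency, no self-loops

      def spread(adopted, q):
          """Best-response dynamics: a player adopts once the adopting share of their
          neighbours reaches the threshold q; iterate until no further adoption."""
          adopted = adopted.copy()
          deg = np.maximum(A.sum(axis=1), 1)     # avoid division by zero for isolated nodes
          while True:
              new = adopted | ((A @ adopted) / deg >= q)
              if (new == adopted).all():
                  return new
              adopted = new

      seed_set = np.zeros(n, dtype=bool)
      seed_set[rng.choice(n, 5, replace=False)] = True    # small starting set
      final = spread(seed_set, q=0.25)
      print(f"contagion depth: {final.mean():.2%} of players adopt")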
  14. By: Tobias Cagala; Ulrich Glogowsky; Johannes Rincke; Anthony Strittmatter
    Abstract: Ineffective fundraising lowers the resources charities can use for goods provision. We combine a field experiment and a causal machine-learning approach to increase a charity's fundraising effectiveness. The approach optimally targets fundraising to individuals whose expected donations exceed solicitation costs. Among past donors, optimal targeting substantially increases donations (net of fundraising costs) relative to benchmarks that target everybody or no one. By contrast, individuals who were previously asked but never donated should not be targeted. Further, the charity requires only publicly available geospatial information to realize the gains from targeting. We conclude that charities not engaging in optimal targeting waste resources.
    Keywords: fundraising, charitable giving, gift exchange, targeting, optimal policy learning, individualized treatment rules
    JEL: C93 D64 H41 L31 C21
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_9037&r=
  15. By: Xin Zhang; Lan Wu; Zhixue Chen
    Abstract: Factor strategies have gained growing popularity in industry with the fast development of machine learning. Usually, multiple factors are fed to an algorithm to produce cross-sectional return predictions, which are further used to construct a long-short portfolio. Instead of predicting the value of the stock return, emerging studies predict a ranked stock list using mature learn-to-rank technology. In this study, we propose a new listwise learn-to-rank loss function which aims to emphasize both the top and the bottom of a rank list. Our loss function, motivated by the long-short strategy, is endogenously shift-invariant and can be viewed as a direct generalization of ListMLE. Under different transformation functions, our loss can lead to consistency with binary classification loss or permutation-level 0-1 loss. A probabilistic explanation of our model is also given as a generalized Plackett-Luce model. Based on a dataset of 68 factors in the China A-share market from 2006 to 2019, our empirical study demonstrates the strength of our method, which achieves an out-of-sample annual return of 38% with a Sharpe ratio of 2.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.12484&r=
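    Illustrative sketch: the plain ListMLE loss, i.e. the negative Plackett-Luce log-likelihood of the true ranking given model scores, which the paper generalizes to emphasize both the top and the bottom of the list; the toy scores and ranking below are assumptions for illustration.
      import torch

      def listmle_loss(scores, true_order):
          """Negative Plackett-Luce log-likelihood of the true ranking given model scores.

          scores: (n,) tensor of predicted scores; true_order: item indices from best to
          worst. This is plain ListMLE, not the paper's generalized loss.
          """
          s = scores[true_order]
          # log partition terms: logsumexp over the items remaining at each position.
          rev_logcumsum = torch.flip(torch.logcumsumexp(torch.flip(s, dims=[0]), dim=0), dims=[0])
          return (rev_logcumsum - s).sum()

      # Toy usage: five stocks, the true ranking given by realised returns.
      scores = torch.tensor([0.2, 1.5, -0.3, 0.7, 0.1], requires_grad=True)
      true_order = torch.tensor([1, 3, 0, 4, 2])          # best to worst
      loss = listmle_loss(scores, true_order)
      loss.backward()
      print(loss.item(), scores.grad)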
  16. By: Lech A. Grzelak
    Abstract: Every x-adjustment in the so-called xVA financial risk management framework relies on the computation of exposures. Considering thousands of Monte Carlo paths and tens of simulation steps, a financial portfolio needs to be evaluated numerous times during the lifetime of the underlying assets. This is the bottleneck of every xVA simulation. In this article, we explore numerical techniques for improving the simulation of exposures. We aim to drastically reduce the number of portfolio evaluations, particularly for large portfolios involving multiple, correlated risk factors. The use of the Stochastic Collocation (SC) method, together with Smolyak's sparse grid extension, allows for a significant reduction in the number of portfolio evaluations, even when dealing with many risk factors. The proposed method can easily be applied to portfolios of any composition and size. We report that for a realistic portfolio comprising linear derivatives, the expected reduction in portfolio evaluations may exceed a factor of 6,000, depending on the dimensionality and the required accuracy. We give illustrative examples and examine the method with realistic multi-currency portfolios.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.14319&r=
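    Illustrative sketch: the core stochastic-collocation idea of evaluating the full portfolio only at a handful of collocation points per date and interpolating to all Monte Carlo paths, shown for a single risk factor; the quantile-based node choice, the stand-in portfolio, and all parameters below are assumptions, and the Smolyak sparse-grid extension for multiple factors is omitted.
      import numpy as np
      from numpy.polynomial import Polynomial

      rng = np.random.default_rng(8)

      def portfolio_value(x):
          """Stand-in for an expensive full portfolio revaluation at risk-factor level x."""
          return np.maximum(x - 100.0, 0.0) - 0.3 * np.maximum(95.0 - x, 0.0)

      # Monte Carlo risk-factor paths (one factor, many paths, several dates).
      n_paths, n_dates = 50_000, 20
      shocks = 0.2 * np.sqrt(1.0 / n_dates) * rng.standard_normal((n_paths, n_dates))
      S = 100.0 * np.exp(np.cumsum(shocks, axis=1))

      exposures = np.empty(n_dates)
      for t in range(n_dates):
          # Collocation: evaluate the portfolio only at a handful of points chosen from the
          # simulated distribution, then interpolate the values to all paths.
          probs = [0.01, 0.15, 0.5, 0.85, 0.99]
          nodes = np.quantile(S[:, t], probs)
          poly = Polynomial.fit(nodes, portfolio_value(nodes), deg=4)
          approx_values = poly(S[:, t])
          exposures[t] = np.maximum(approx_values, 0.0).mean()   # expected positive exposure

      print("5 full evaluations per date instead of", n_paths)
      print(exposures.round(3)[:5])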
  17. By: De Henau, Jerome; Himmelweit, Susan; Reis, Sara
    Abstract: This paper focuses on the possibilities and functionalities offered by UKMOD, the tax-benefit microsimulation model for the UK, to simulate and analyse the distributional impact of three examples of Basic Income schemes. We show how to build in functionalities that ensure fiscal neutrality, and we aim to highlight some of the trade-offs that implementing a Basic Income scheme brings. This paper does not intend to engage with arguments around the desirability of introducing Basic Income but rather focuses on the options that UKMOD offers, giving a couple of examples of schemes based on a set of criteria described below.
    Date: 2021–04–22
    URL: http://d.repec.org/n?u=RePEc:ese:emodwp:em5-21&r=
  18. By: Saeed Nosratabadi; Sina Ardabili; Zoltan Lakner; Csaba Mako; Amir Mosavi
    Abstract: Advancing models for the accurate estimation of food production is essential for policymaking and managing national plans of action for food security. This research proposes two machine learning models for the prediction of food production. The adaptive network-based fuzzy inference system (ANFIS) and multilayer perceptron (MLP) methods are used to advance the prediction models. In the present study, two variables, livestock production and agricultural production, were considered as the sources of food production. Three variables were used to evaluate livestock production, namely livestock yield, live animals, and animals slaughtered, and two variables were used to assess agricultural production, namely agricultural production yields and losses. Iran was selected as the case study. Accordingly, time-series data on livestock and agricultural production in Iran from 1961 to 2017 were collected from the FAOSTAT database. First, 70% of the data were used to train ANFIS and MLP, and the remaining 30% were used to test the models. The results show that the ANFIS model with generalized bell-shaped (Gbell) membership functions has the lowest error level in predicting food production. The findings of this study provide a suitable tool for policymakers, who can use this model to predict future food production and thus draw up proper plans for food security and the food supply of future generations.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.14286&r=
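    Illustrative sketch: the MLP half of the exercise with the paper's 70/30 train/test split, shown on placeholder data since the FAOSTAT series are not reproduced here; the network architecture and scaling choices are assumptions.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(9)

      # Placeholder annual series (1961-2017): five inputs and a food-production target.
      # In the paper these come from FAOSTAT (livestock yield, live animals, animals
      # slaughtered, agricultural yields and losses); random data is used here.
      years = np.arange(1961, 2018)
      X = rng.normal(size=(len(years), 5))
      y = X @ np.array([0.8, 0.5, 0.4, 0.9, -0.3]) + 0.1 * rng.normal(size=len(years))

      # 70% of the sample for training, the remaining 30% for testing, as in the paper.
      split = int(0.7 * len(years))
      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
      model.fit(X[:split], y[:split])
      print("test MSE:", mean_squared_error(y[split:], model.predict(X[split:])))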
  19. By: Domenico Delli Gatti; Elisa Grugni
    Abstract: We explore the macro-financial consequences of the disruption of a supply chain in an agent-based framework characterized by two networks: a credit network connecting banks and firms, and a production network connecting upstream and downstream firms. We consider two scenarios. In the first, because of the lockdown all upstream firms are forced to cut production. This generates a sizable downturn during the lockdown due to the indirect effects of the shock (a network-based financial accelerator). In the second scenario, only those upstream firms located in the “red zone” are forced to contract production. In this case the recession is milder and the recovery begins earlier. Upstream firms hit by the shock, in fact, will be abandoned by their customers, who will switch to suppliers located outside the red zone. In this way firms endogenously reconstruct (at least in part) the supply chain after the disruption. This is the main determinant of the mitigated impact of the shock in the “red zone” type of lockdown.
    Keywords: supply chain disruption, agent based macroeconomic model
    JEL: E17 E44 E70
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_9029&r=
  20. By: Andres Algaba (Faculty of Social Sciences and Solvay Business School, Vrije Universiteit Brussel, Pleinlaan 2, 1010 Brussel, Belgium); Samuel Borms (Faculty of Social Sciences and Solvay Business School, Institute of Financial Analysis, University of Neuchâtel, Switzerland.); Kris Boudt (Solvay Business School, Vrije Universiteit Brussel; Department of Economics, Ghent University; School of Business and Economics, Vrije Universiteit Amsterdam); Brecht Verbeken (Faculty of Social Sciences and Solvay Business School.)
    Abstract: Policymakers, firms, and investors closely monitor traditional survey-based consumer confidence indicators and treat them as an important piece of economic information. We propose a latent factor model for the vector of monthly survey-based consumer confidence and daily sentiment embedded in economic media news articles. The proposed mixed-frequency dynamic factor model framework uses a novel covariance matrix specification. Model estimation and real-time filtering of the latent consumer confidence index are computationally simple. In a Monte Carlo simulation study and an empirical application concerning Belgian consumer confidence, we document the economically significant accuracy gains obtained by including daily news sentiment in the dynamic factor model for nowcasting consumer confidence.
    Keywords: dynamic factor model, mixed-frequency, nowcasting, sentiment index, Sentometrics, state space
    JEL: C32 C51 C53 C55
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:nbb:reswpp:202102-396&r=
  21. By: Yanci Zhang; Tianming Du; Yujie Sun; Lawrence Donohue; Rui Dai
    Abstract: Form 10-Q, the quarterly financial statement, is one of the most crucial filings through which US public firms disclose their financial and other relevant business operation information. Due to the huge number of 10-Q filings each quarter and the wide variation in formatting across companies, it has long been a problem in the field to provide a generalized way to dissect filings and retrieve the itemized information. In this paper, we create a tool to itemize 10-Q filings using a multi-stage process that blends a rule-based algorithm with a CNN deep learning model. The implementation is an integrated pipeline which provides a solution to item retrieval at large scale. This enables cross-sectional and longitudinal textual analysis of a massive number of companies.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.11783&r=
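    Illustrative sketch: the rule-based half of such a pipeline can be a regular expression pass that locates "Item N." headers and slices the filing between them; the pattern and sample text below are assumptions for illustration, and the CNN stage that handles irregular formats is omitted.
      import re

      # Find "Item N." headers in the filing text and slice the document between them.
      ITEM_PATTERN = re.compile(r"^\s*item\s+(\d+[a-z]?)\.?\s", re.IGNORECASE | re.MULTILINE)

      def itemize(filing_text):
          matches = list(ITEM_PATTERN.finditer(filing_text))
          items = {}
          for i, m in enumerate(matches):
              start = m.end()
              end = matches[i + 1].start() if i + 1 < len(matches) else len(filing_text)
              items[m.group(1).lower()] = filing_text[start:end].strip()
          return items

      sample = """Item 1. Financial Statements
      (unaudited figures follow)
      Item 2. Management's Discussion and Analysis
      Results improved over the prior quarter.
      Item 4. Controls and Procedures
      No changes."""
      for key, body in itemize(sample).items():
          print(key, "->", body[:40])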
  22. By: Böhl, Gregor; Hommes, Cars H.
    Abstract: Can boundedly rational agents survive competition with fully rational agents? The authors develop a highly nonlinear heterogeneous-agents model with rational, forward-looking agents versus boundedly rational, backward-looking agents and evolving market shares that depend on their relative performance. Their novel numerical solution method detects equilibrium paths characterized by complex bubble and crash dynamics. Boundedly rational trend-extrapolators amplify small deviations from fundamentals, while rational agents anticipate market crashes after large bubbles and drive prices back close to fundamental value. Overall, rational and non-rational beliefs co-evolve over time, with time-varying impact, and their interaction produces complex endogenous bubbles and crashes without any exogenous shocks.
    Keywords: Heterogeneous agents, trend-extrapolation, bubbles, numerical solution method
    JEL: C63 E03 E32 E44 E51
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:zbw:imfswp:156&r=
  23. By: Darren Shannon; Grigorios Fountas
    Abstract: We present an alternative approach to the forecasting of motor vehicle collision rates. We adopt an oft-used tool in mathematical finance, the Heston Stochastic Volatility model, to forecast the short-term and long-term evolution of motor vehicle collision rates. We incorporate a number of extensions to the Heston model to make it fit for modelling motor vehicle collision rates. We incorporate the temporally unstable and non-deterministic nature of collision rate fluctuations, and introduce a parameter to account for periods of accelerated safety. We also adjust estimates to account for the seasonality of collision patterns. Using these parameters, we perform a short-term forecast of collision rates and explore a number of plausible scenarios using long-term forecasts. The short-term forecast shows a close affinity with realised rates (95% accuracy). The long-term scenarios suggest that modest targets to reduce collision rates (by 1.83% annually) and targets to halve the fluctuations of month-to-month collision rates could have significant benefits for road safety. The median forecast in this scenario suggests a 50% fall in collision rates, with 75% of simulations suggesting that an effective change in collision rates is observed before 2044. The main benefit of the model is that it eschews the need to set unreasonable safety targets that are often missed. Instead, the model presents the effects that modest and achievable targets can have on road safety over the long run, while incorporating random variability. Examining the parameters that underlie expected collision rates will aid policymakers in determining the effectiveness of implemented policies.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.11461&r=
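    Illustrative sketch: an Euler-Maruyama simulation of a Heston-type system in which the collision rate plays the role of the asset and its variance follows a mean-reverting square-root process; the drift reflecting a 1.83% annual reduction target echoes the scenario above, while all other parameter values are assumptions, not the paper's estimates.
      import numpy as np

      rng = np.random.default_rng(10)

      # Euler-Maruyama simulation of a Heston-type system: the monthly collision rate is the
      # "asset" and its variance follows a mean-reverting square-root (CIR) process.
      lam0, v0 = 100.0, 0.02          # initial collision rate and variance (illustrative)
      mu = -0.0183                    # drift: a 1.83% annual reduction target
      kappa, theta, xi, rho = 2.0, 0.02, 0.3, -0.3
      years, steps_per_year, n_paths = 23, 12, 10_000
      dt = 1.0 / steps_per_year

      lam = np.full(n_paths, lam0)
      v = np.full(n_paths, v0)
      for _ in range(years * steps_per_year):
          z1 = rng.standard_normal(n_paths)
          z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
          lam = lam * np.exp((mu - 0.5 * v) * dt + np.sqrt(np.maximum(v, 0) * dt) * z1)
          v = v + kappa * (theta - v) * dt + xi * np.sqrt(np.maximum(v, 0) * dt) * z2
          v = np.maximum(v, 0.0)      # full truncation keeps the variance non-negative

      print("median long-run collision rate:", np.median(lam).round(1))
      print("share of paths at least 50% below the start:", (lam < 0.5 * lam0).mean().round(2))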
  24. By: Alqaralleh, Huthaifa; Canepa, Alessandra; Chini, Zanetti (University of Turin)
    Abstract: In this study we examine the impact of the Covid-19 pandemic on stock market contagion. The empirical analysis is conducted on six major stock markets using a novel wavelet-copula-GARCH procedure to account for both the time and frequency domains of stock market correlation. We find evidence of contagion in the stock markets under consideration during the Covid-19 pandemic.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:uto:dipeco:202110&r=
  25. By: Loretta Mastroeni; Greta Quaresima; Pierluigi Vellucci
    Abstract: In this article we analyse the oil-food price co-movement and its determinants in both the time and frequency domains, using the wavelet analysis approach. Our results show that the significant local correlation between food and oil prices is only apparent. It is mainly due to the activity of commodity index investments and, to a lesser extent, to the increasing demand from emerging economies. Furthermore, we employ wavelet entropy to assess the predictability of the time series under consideration. We find that some variables share with both food and oil a similar predictability structure. These variables are those that mostly co-move with both oil and food. Some policy implications can be derived from our results, the most relevant being that the activity of commodity index investments increases the correlation between food and oil prices. This activity generates highly integrated markets and an increasing risk of joint price movements, which is potentially dangerous in periods of economic downturn and financial stress. We suggest that governments should also provide subsidy packages based on the commodities traded in commodity indices to protect producers and consumers from adverse price movements due to financial activity rather than a lack of supply or demand.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.11891&r=

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.